Data Diffusion: Dynamic Resource Provision and Data-Aware Scheduling for Data Intensive Applications
ABSTRACT
Data-intensive applications often involve the analysis of large datasets that require large amounts of compute and storage resources. While dedicated compute and/or storage farms offer good task/data throughput, they suffer from poor resource utilization under varying workload conditions. If we instead move such data to distributed computing resources, we incur expensive data transfer costs. In this paper, we propose a data diffusion approach that combines dynamic resource provisioning, on-demand data replication and caching, and data locality-aware scheduling to achieve improved resource efficiency under varying workloads. We define an abstract data diffusion model that takes into consideration the workload characteristics, data access cost, application throughput, and resource utilization; we validate the model using a real-world large-scale astronomy application. Our results show that data diffusion can increase the performance index by as much as 34X and improve application response time by over 506X, while achieving near-optimal throughputs and execution times.
Keywords: dynamic resource provisioning, data diffusion, data caching, data management, data-aware scheduling, data-intensive applications, Grid, Falkon
1. INTRODUCTION
The ability to analyze large quantities of data has become increasingly important in many fields. To achieve rapid turnaround, data may be distributed over hundreds of computers. In such circumstances, data locality has been shown to be crucial to the successful and efficient use of large distributed systems for data-intensive applications [7, 29]. One approach to achieving data locality, adopted, for example, by Google [3, 10], is to build large compute-storage farms dedicated to storing data and responding to user requests for processing. However, such approaches can be expensive (in terms of idle resources) if load varies significantly over the two dimensions of time and/or the data of interest.
We previously outlined [31] an alternative data diffusion approach, in which resources required for data analysis are acquired dynamically, in response to demand. Resources may be acquired either locally or remotely; their location matters only in terms of associated cost tradeoffs. Both data and applications are copied (they diffuse) to newly acquired resources for processing. Acquired resources (computers and storage) and the data that they hold can be cached for some time, thus allowing more rapid responses to subsequent requests. If demand drops, resources can be released, allowing their use for other purposes. Thus, data diffuses over an increasing number of CPUs as demand increases, and then contracts as load decreases.
We have implemented the data diffusion concept in Falkon, a Fast and Light-weight tasK executiON framework [4, 11]. Data diffusion involves a combination of dynamic resource provisioning, data caching, and data-aware scheduling. The approach is reminiscent of cooperative caching [16], cooperative web caching [17], and peer-to-peer storage systems [15]. (Other data-aware scheduling approaches tend to assume static resources [1, 2].) However, in our approach we need to dynamically acquire not only storage resources but also computing resources. In addition, datasets may be terabytes in size and data access is for analysis (not retrieval). Further complicating the situation is our limited knowledge of workloads, which may involve many different applications.
In principle, data diffusion can provide the benefit of dedicated hardware without the associated high costs. It can also overcome inefficiencies that arise when executing data-intensive applications in distributed environments due to the high costs of data movement [29]: if workloads have sufficient internal locality of reference [20], then it is feasible to acquire and use remote resources despite high initial data movement costs. The performance achieved with data diffusion depends crucially on the characteristics of application workloads and the underlying infrastructure. As a first
step towards quantifying these dependences, one of our previous studies [31] conducted experiments with both micro-benchmarks and a large-scale astronomy application, and showed that data diffusion improves performance relative to alternative approaches and provides improved scalability and aggregate I/O bandwidth. Our previous results did not consider the dynamic resource provisioning aspect of data diffusion. This paper's focus is to explore the effects of provisioning on application performance, a central theme in data diffusion. We also introduce an abstract model that formally defines data diffusion and that can be used to study its effects in different scenarios at a theoretical level. Finally, we perform a preliminary validation of the model against results from a real large-scale astronomy application [6, 31].
2. RELATED WORK
The results presented here build on our past work on resource provisioning [11], task dispatching [4], and data diffusion [23, 31]. This section is partitioned in two parts, first covering related work in resource provisioning (i.e., multi-level scheduling) and then data management.
Multi-level scheduling has been applied at the OS level [27, 30] to provide faster scheduling for groups of tasks for a specific user or purpose by employing an overlay that does lightweight scheduling within a heavier-weight container of resources: e.g., threads within a process or a pre-allocated thread group. Frey et al. pioneered the application of this principle to clusters via their work on Condor glide-ins [35]. Requests to a batch scheduler (submitted, for example, via Globus GRAM4 [34]) create Condor startd processes, which then register with a Condor resource manager that runs independently of the batch scheduler. Others have also used this technique. For example, Mehta et al. [38] embed a Condor pool in a batch-scheduled cluster, while MyCluster [36] creates personal clusters running Condor or SGE. Such virtual clusters can be dedicated to a single workload. Thus, Singh et al. find, in a simulation study [37], a reduction of about 50% in completion time, due to a reduction in queue wait time. However, because they rely on heavyweight schedulers to dispatch work to the virtual cluster, the per-task dispatch time remains high.
In a different space, Bresnahan et al. [41] describe a multi-level scheduling architecture specialized for the dynamic allocation of compute cluster bandwidth. A modified Globus GridFTP server varies the number of GridFTP data movers as server load changes. Appleby et al. [39] were one of several groups to explore dynamic resource provisioning within a data center. Ramakrishnan et al. [40] also address adaptive
resource provisioning with a focus primarily on resource sharing and container-level resource management. Our work differs in its focus on resource provisioning of non-dedicated resources managed by local resource managers (LRMs).
Shifting our focus to data management, we believe that coupling it with resource management will be most effective. Ranganathan et al. used simulation studies [9] to show that proactive data replication can improve application performance. The Stork scheduler [25] seeks to improve performance and reliability when batch scheduling by explicitly scheduling data placement operations. However, while Stork can be used with other system components to co-schedule CPU and storage resources, there is no attempt to retain nodes between tasks as in our work. The Gfarm team implemented a data-aware scheduler in Gfarm using an LSF scheduler plugin [1, 21]. Their performance results are for a small system (6 nodes, 300 jobs, 900MB input files, a 2640-second workload without data-aware scheduling and 1650 seconds with data-aware scheduling, 0.1 to 0.2 jobs/sec, and 90MB/s to 180MB/s data rates); it is not clear that their approach scales to larger systems. In contrast, we have tested our proposed data diffusion with 75 nodes, 250K jobs, input data ranging from 1B to 1GB, workflows exceeding 1000 jobs/sec, and data rates exceeding 8750MB/s [31].
BigTable [19], the Google File System (GFS) [3], and MapReduce [10] (as well as Hadoop [24]) couple data and computing resources to accelerate data-intensive applications. However, these systems all assume a static set of resources. Furthermore, the tight coupling of the execution engine (MapReduce, Hadoop) and the file system (GFS) means that applications that want to use these tools must be modified. In our work, we further extend this fusion of data and compute resource management by also enabling dynamic resource provisioning, which we assert can provide performance advantages when workload characteristics change over time. In addition, because we perform data movement prior to task execution, we are able to run applications unmodified. The batch-aware distributed file system (BAD-FS) [26] caches data transferred from centralized data storage servers to local disks. However, it uses a capacity-aware scheduler, which differs from a data-aware scheduler in its focus on ensuring that jobs have enough capacity to execute, rather than on placing jobs to minimize cache-to-cache transfers. We expect BAD-FS to produce more local area traffic than data diffusion. Although BAD-FS addresses dynamic deployment via multi-level scheduling, it does not address dynamic reconfiguration during the lifetime of the deployment, a key feature offered in Falkon.
In the architecture shown in Figure 1, task submission triggers the dynamic resource provisioning to allocate resources via GRAM4 [34] from the available set of resources, which in turn allocates the resources and bootstraps the executors on the remote machines. The black dotted lines represent the scheduler sending a task to the compute nodes, along with the necessary information about where to find its input data. The red thick solid lines represent the ability of each executor to get data from remote persistent storage. The blue thin solid lines represent the ability of each storage resource to obtain cached data from another peer executor. The current implementation runs a GridFTP server [30] alongside each executor, which allows other executors to read data from its cache.
Figure 1: Architecture overview of Falkon extended with data diffusion (data management and data-aware scheduler)
We assume that data is not modified after initial creation, an assumption that we found to be true for many data analysis applications. Thus, we can avoid complicated and expensive cache coherence schemes. We implement four well-known cache eviction policies [16]: Random, FIFO (First In First Out), LRU (Least Recently Used), and LFU (Least Frequently Used). The experiments in this paper all use LRU; we will study the effects of other policies in future work.
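As a concrete illustration of the eviction mechanism, the sketch below shows one way an LRU file cache of this kind could be implemented; the class and method names are ours and do not correspond to Falkon's actual code. Because files are immutable, eviction never requires a write-back.

    from collections import OrderedDict

    class LRUFileCache:
        """Minimal LRU cache for immutable files, keyed by logical file name."""
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.used = 0
            self.entries = OrderedDict()  # name -> size, ordered by recency

        def access(self, name):
            """Return True on a cache hit and mark the file most recently used."""
            if name in self.entries:
                self.entries.move_to_end(name)
                return True
            return False

        def insert(self, name, size):
            """Add a file after a cache miss, evicting LRU entries as needed."""
            while self.used + size > self.capacity and self.entries:
                _, evicted_size = self.entries.popitem(last=False)  # drop LRU entry
                self.used -= evicted_size
            self.entries[name] = size
            self.used += size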
The first-available policy uses no information concerning the location of data objects needed by the task. Thus, the executor must fetch all data needed by a task from persistent storage on every access. This policy is used for all experiments that do not use data diffusion.
The max-cache-hit policy uses information about data location to dispatch each task to the executor caching the largest number of data objects needed by that task. If that executor is busy, task dispatch is delayed until the executor becomes available. This strategy can be expected to reduce data movement operations compared to first-cache-available and max-compute-util, but may lead to load imbalances where CPU utilization is suboptimal, especially if data popularity is not uniform or nodes frequently join and leave (as is the case for dynamic resource provisioning under varying loads). This policy is most suitable for data-intensive workloads.
The max-compute-util policy also leverages data location information. This policy attempts to maximize resource utilization, even at the potentially higher cost of data movement. It always sends a task to an available executor, but if there are several candidates, it chooses the one that has the most data needed by the task. This policy is most suitable for compute-intensive workloads.
We believe that a combination of policy (3) and policy (4) leads to good results in practice, as we also show in the performance evaluation in this paper. We have two heuristics for combining these two policies into a new policy called good-cache-compute, which attempts to strike a good balance between them. The first heuristic is based on CPU utilization: it sets a threshold that decides when to use policy (3) and when to use policy (4). A value of 90% works well in practice, as it keeps CPU utilization above 90% while giving the scheduler some flexibility to improve the cache hit rates significantly when compared to the max-compute-util policy (which strictly aims for 100% CPU utilization). The second heuristic is the maximum replication factor, which determines how efficiently the cache space is utilized.
To aid in explaining the scheduling algorithm, we first define several variables:
Q: wait queue
Ti: task at position i in the wait queue; position 0 is the head and position n is the tail
Eset: executor sorted set; element existence indicates that the executor is registered and in one of three states: free, busy, or pending
Imap: file index hash map; the map key is the logical file name and the value is a sorted set of executors where the file is cached
Emap: executor hash map; the map key is the executor name, and the value is a sorted set of logical file names cached at that executor
W: scheduling window of tasks to consider from the wait queue when making the scheduling decision
The scheduler is separated into two parts: one that sends out a notification, and another that decides which task to assign to which executor at the time of work dispatch. The first part of the scheduler takes a task as input, attempts to find the best executor that is free, and notifies it that there is work available for pick-up. The pseudo code for this first part is:
while (Q !empty)
    for (all files in T0)
        tempSet = Imap(filei)
        for (all executors in tempSet)
            candidates[tempSetj]++
    sort candidates[] according to values
    for (all candidates)
        if Eset(candidatei) = freeState
            mark executor candidatei as pending
            remove T0 from wait queue and mark as pending
            sendNotification to candidatei to pick up T0
            break
    if no candidate is found in the freeState
        send notification to the next free executor
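A rough Python rendering of this first part is shown below; the data structures and names (imap, executor_state, task.files, free_executors) are our own simplifications of the Imap and Eset structures above, not Falkon's actual implementation.

    def notify_best_executor(task, imap, executor_state, free_executors):
        """For the task at the head of the queue, rank executors by how many
        of the task's files they already cache (via the Imap index) and
        notify the best one that is currently free."""
        votes = {}
        for fname in task.files:
            for ex in imap.get(fname, ()):      # executors caching this file
                votes[ex] = votes.get(ex, 0) + 1
        # Try candidates in order of decreasing cached-file count.
        for ex in sorted(votes, key=votes.get, reverse=True):
            if executor_state.get(ex) == "free":
                executor_state[ex] = "pending"
                return ex                        # notify this executor
        # No caching executor is free: fall back to any free executor.
        return next(iter(free_executors), None)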
Once an executor receives a notification to pick up a task, it tries to pick up more than one task if possible; the scheduler is invoked again, but this time optimizing the lookup given an executor name rather than a task description. The scheduler takes the scheduling window size and builds a per-task cache-hit score. If at any time a task is found that produces a 100% local cache hit rate, the scheduler removes this task from the wait queue and adds it to the list of tasks to dispatch to this executor. This is repeated until the maximum number of tasks has been retrieved and prepared to be sent to the executor. If the entire scheduling window is exhausted and no task was found with a 100% local cache hit rate, the m tasks with the highest local cache hit rates are dispatched. For the max-compute-util policy, if no tasks were found that would yield any cache hits, the top m tasks are taken from the wait queue and dispatched to the executor. For the max-cache-hit policy, no tasks are returned in that case, signaling that the executor is to return to the free pool of executors. For the good-cache-compute policy, the CPU utilization at the time of the scheduling decision determines which of these two actions to take. The CPU utilization is computed by dividing the number of busy nodes by the number of all registered nodes. The pseudo code for the second part is:
while (tasksInspected < W)
    fileSeti = all files in Ti
    cacheHiti = |intersection of fileSeti and Emap(executor)|
    depending on cacheHiti and CPU utilization, keep or discard Ti
        keep: remove Ti from Q and add Ti to the list to dispatch
        discard: do nothing
    if the list of tasks to dispatch is long enough
        break
The scheduler's complexity varies with the policy used. For the first-available policy, the cost is O(1), as the scheduler simply takes the first available executor, sends a notification, and dispatches the first task in the queue. The max-cache-hit, max-compute-util, and good-cache-compute policies are more complex, with a complexity of O(|Ti| + replicationFactor + min(|Q|, W)). This can amount to thousands of operations for a single scheduling decision in the worst case, depending on the maximum size of the scheduling window and the wait queue length. However, since all data structures used to keep track of executors and files are hash maps and sorted sets, performing many in-memory operations is quite efficient. Section 5.1 investigates the raw performance of the scheduler under the various policies; we measured the scheduler's ability to perform 1322 to 1666 scheduling decisions per second for policies (3), (4), and (5) with a maximum window size of 3200.
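The sketch below illustrates the window-based selection that the second part performs; it is a simplification under our own naming (emap as a dictionary of sets, task.files as a set), not Falkon's code, and it folds the 100%-hit shortcut and the highest-score fallback into one function.

    def select_tasks_for_executor(executor, wait_queue, emap, window, max_tasks):
        """Pick up to max_tasks tasks from the head of the wait queue whose
        files overlap most with what `executor` already has cached."""
        cached = emap.get(executor, set())
        perfect, scored = [], []
        for task in wait_queue[:window]:           # inspect at most `window` tasks
            files = task.files                     # set of logical file names
            hit_rate = len(files & cached) / len(files) if files else 1.0
            if hit_rate == 1.0:
                perfect.append(task)               # 100% local cache hits
                if len(perfect) == max_tasks:
                    break
            else:
                scored.append((hit_rate, task))
        if len(perfect) < max_tasks:
            # Fall back to the highest-scoring partial hits in the window.
            scored.sort(key=lambda pair: pair[0], reverse=True)
            perfect += [t for _, t in scored[:max_tasks - len(perfect)]]
        for task in perfect:
            wait_queue.remove(task)
        return perfect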
4. ABSTRACT MODEL
We define an abstract model for data-centric task farms as a common parallel pattern that drives independent computational tasks while taking data locality into consideration in order to optimize the performance of analyzing large datasets. The data-centric task farm model is the abstract counterpart of our practical realization in Falkon, with its dynamic resource provisioning capabilities and support for data diffusion. Just as Falkon has been used successfully in many domains and applications, we believe the data-centric task farm model generalizes and is applicable to many different domains as well. We claim that the model can help study dynamic resource provisioning and data diffusion with greater ease, to determine an application's end-to-end performance improvement, resource utilization, efficiency, and scalability. By formally defining this model, we aim for the data diffusion concept to live beyond its practical realization in Falkon. More information on data-centric task farms can be found in a technical report [27].
Data Stores: The data initially resides on a set of persistent data stores P, where |P| >= 1. The set of transient data stores T, where |T| >= 0, are smaller than the persistent data stores and are only capable of storing a fraction of the persistent data stores' data objects. We assume that the transient data stores T are co-located with compute resources, hence yielding a lower-latency data path than the persistent data stores.
Data Objects: Ω(p) represents the data objects found in the persistent data store p ∈ P. Similarly, ω(t) represents the data objects cached locally at a transient data store t ∈ T. The set of persistent data stores holds the set of all data objects, Ω. For each data object δ ∈ Ω, |δ| denotes the data object's size and L(δ) denotes the data object's storage location(s).
Store Capacity: For each persistent data store p ∈ P and transient data store t ∈ T, C(p) and C(t) denote the persistent and transient data store's capacity, respectively.
Compute Speed: For each transient resource t ∈ T, S(t) denotes its compute speed.
Load: For any data store, we define load as the number of concurrent read/write requests; ℓ(p) and ℓ(t) denote the load on data stores p and t.
Ideal Bandwidth: For any persistent data store p ∈ P and transient data store t ∈ T, bw(p) and bw(t) denote the ideal bandwidth of the persistent and transient data store, respectively. The transient data stores have limited availability, and their individual bandwidth is lower than that of the persistent data stores, bw(t) < bw(p). We assume there are few high-capacity persistent data stores and many low-capacity transient data stores, such that C(t) << C(p), given that |T| >> |P|.
Available Bandwidth: For any persistent data store p ∈ P and transient data store t ∈ T, we define the available bandwidth as a function of the ideal bandwidth and the load; more formally, abw(p, ℓ(p)) and abw(t, ℓ(t)) denote the available bandwidth of the persistent and transient data store, respectively. The relationship between the ideal and available bandwidth is given by: abw(p, ℓ(p)) < bw(p) for ℓ(p) >= 1, and abw(p, ℓ(p)) = bw(p) for ℓ(p) = 0; the same relationship holds for transient data stores.
Copy Time: For any data object δ ∈ Ω and transient data store t ∈ T, we define the time to copy the data object δ to t by the function τ(δ, t). In the ideal case, τ(δ, t) = |δ| / min[bw(s), bw(t)], where s denotes the source data store holding δ (s ∈ L(δ)), bw(s) and bw(t) represent the source and destination ideal bandwidth, respectively, and |δ| represents the data object's size; the same definition applies whether the source is a persistent data store or another transient data store. In reality, this is an oversimplification, since the copy time τ(δ, t) also depends on other factors, such as the load ℓ on the storage resources, the latency between the source and destination, and the error rates encountered during transmission. Assuming low error rates and low latency, the copy time is affected only by the data object's size and the available bandwidth abw defined above. More formally, τ(δ, t) is defined as
τ(δ, t) = |δ| / min[ abw(s, ℓ(s)), abw(t, ℓ(t)) ].
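For illustration with hypothetical numbers (not measurements from our testbed): copying a 10MB data object from a persistent store whose available bandwidth is currently 100MB/s to a transient store whose available bandwidth is 500MB/s takes τ(δ, t) = 10MB / min(100, 500)MB/s = 0.1 seconds; the more heavily loaded end of the transfer dominates the copy time.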
Tasks: Let K denote the incoming stream of tasks. For each task κ ∈ K, let e(κ, t) denote the time needed to execute the task on a computational resource t; let D(κ) ⊆ Ω denote the set of data objects that the task requires; and let o(κ) denote the overhead to dispatch the task and return its result.
Computational Resource State: If a compute resource is executing a task, it is in the busy state, denoted b; otherwise, it is in the free state, denoted f. Let Tb denote the set of all compute resources in the busy state and Tf the set of all compute resources in the free state; these two sets satisfy Tb ∪ Tf = T.
Resource Acquisition and Release: We define a resource acquisition policy that decides when and how many transient resources to acquire; we also define a resource release policy that decides when to release some acquired resources.
Dispatch Policies: Each incoming task κ ∈ K is dispatched to a transient resource t ∈ T, selected according to the dispatch policy. We define five dispatch policies: 1) first-available, 2) first-cache-available, 3) max-cache-hit, 4) max-compute-util, and 5) good-cache-compute. We focus here on policy (3) and policy (4), as we already covered the other policies in Section 3.2.
The max-cache-hit policy uses information about data location to dispatch each task to the executor that yields the highest cache hits. If no preferred executors are free, task dispatch is delayed until a preferred executor becomes available. This policy aims at maximizing the cache hit/miss ratio; a cache hit occurs when a transient compute resource has the needed data on its own transient data store, and a cache miss occurs when the needed data is not on the same computational resource's data store. Formally, a data object δ ∈ D(κ) is a cache hit if there exists t ∈ T such that δ ∈ ω(t), and a cache miss if for all t ∈ T, δ ∉ ω(t). Let Ch(κ) denote the set of all cache hits and Cm(κ) the set of all cache misses for task κ, such that Ch(κ) ∪ Cm(κ) = D(κ). We define the max-cache-hit dispatch policy as dispatching task κ to the transient resource t that maximizes |{δ ∈ D(κ) : δ ∈ ω(t)}|, i.e., the resource that turns as many elements of D(κ) as possible into cache hits and leaves as few as possible in Cm(κ).
The max-compute-util policy also leverages data location information, but in a different way. It always sends a task to an available executor, but if several are available, it selects the one that has the most data needed by the task. This policy aims to maximize computational resource utilization. We define a free cache hit as a data object δ ∈ D(κ) for which there exists t ∈ Tf such that δ ∈ ω(t), and a free cache miss as a data object δ ∈ D(κ) for which no free resource caches δ (δ is either not cached at all, or cached only on busy resources t ∈ Tb). Let Cf,h(κ) denote the set of all free cache hits and Cf,m(κ) the set of all free cache misses for task κ, such that Cf,h(κ) ⊆ Ch(κ) and Cf,m(κ) ⊇ Cm(κ). We define the max-compute-util dispatch policy as dispatching task κ to the free resource t ∈ Tf that maximizes |{δ ∈ D(κ) : δ ∈ ω(t)}|.
The good-cache-compute policy is the combination of the max-compute-util and max-cache-hit policies, which
attempts to strike a good balance between the two policies. This policy was discussed in Section 3.2.
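A minimal sketch of the utilization-threshold heuristic behind good-cache-compute (using the 90% threshold from Section 3.2) is shown below; the function name and the utilization computation are our own simplification, not Falkon's code.

    def choose_policy(busy_nodes, registered_nodes, threshold=0.90):
        """good-cache-compute: pick which underlying policy to apply for the
        next scheduling decision based on current CPU utilization.
        Below the threshold, favor utilization (max-compute-util); at or
        above it, favor locality (max-cache-hit)."""
        utilization = busy_nodes / registered_nodes if registered_nodes else 0.0
        return "max-compute-util" if utilization < threshold else "max-cache-hit"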
Average Task Execution Time: We define the average task execution time, B, as the sum of all the task execution times divided by the number of tasks; more formally, B = (1/|K|) Σ e(κ) over all κ ∈ K.
Computational Intensity: Let A denote the arrival rate of tasks; we define the computational intensity, I, as I = (A · B) / |T|. If I = 1, then all nodes are fully utilized; if I > 1, tasks are arriving faster than they can be executed; finally, if I < 1, there are idle nodes.
Workload Execution Time: We define the workload execution time, V, of our system as V = max[ B / |T|, 1/A ] · |K|.
Workload Execution Time with Overhead: In general, the total execution time for a task includes overheads, which reduce efficiency. We define the workload execution time with overhead, W, of our system as W = max[ Y / |T|, 1/A ] · |K|, where Y is the average task execution time with overhead:
Y = (1/|K|) Σ [ e(κ) + o(κ) ], for tasks whose needed data objects δ ∈ D(κ) are already cached on the executing resource (δ ∈ ω(t));
Y = (1/|K|) Σ [ e(κ) + o(κ) + τ(δ, t) ], for tasks whose needed data objects must first be copied to the executing resource (δ ∉ ω(t)).
Efficiency: We define the efficiency, E, of a particular workload as E = V / W. The expanded version of efficiency is
E = 1, if Y/|T| <= 1/A;
E = max[ B/Y, |T| / (A · Y) ], if Y/|T| > 1/A.
We claim that for the caching mechanisms to be effective, the aggregate capacity of our transient storage resources must be larger than our workload's working set; formally, Σ C(t) over all t ∈ T must be at least the size of the working set of K. We also claim that we can obtain E > 0.5 if e(κ, t) > o(κ) + τ(δ, t), where e(κ, t), o(κ), and τ(δ, t) are the time to execute the task κ, the time to dispatch it and return its result, and the time to copy the needed data object δ to t, respectively.
Speedup: We define the speedup, S, of a particular workload as S = E · |T|.
Optimizing Efficiency: Having defined both efficiency and speedup, it is possible to optimize for either one. We optimize efficiency by finding the smallest number of transient compute/storage resources |T| while maximizing speedup × efficiency.
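To make these definitions concrete, the sketch below evaluates B, Y, I, V, W, E, and S for a synthetic workload using the formulas above; the numbers and the function name are hypothetical illustrations of ours, not measurements from our experiments or part of the model itself.

    def workload_metrics(exec_times, overheads, copy_times, cached_flags,
                         num_transient, arrival_rate):
        """Compute B, Y, I, V, W, E, S as defined in the abstract model."""
        K = len(exec_times)
        # B: average task execution time (no overheads)
        B = sum(exec_times) / K
        # Y: average task execution time with overhead; copy time is added
        # only for tasks whose data was not cached on the executing resource.
        Y = sum(e + o + (0.0 if cached else c)
                for e, o, c, cached in zip(exec_times, overheads,
                                           copy_times, cached_flags)) / K
        I = arrival_rate * B / num_transient          # computational intensity
        V = max(B / num_transient, 1.0 / arrival_rate) * K
        W = max(Y / num_transient, 1.0 / arrival_rate) * K
        E = V / W                                     # efficiency
        S = E * num_transient                         # speedup
        return B, Y, I, V, W, E, S

    n = 1000
    print(workload_metrics(
        exec_times=[1.0] * n,                         # 1 s of compute per task
        overheads=[0.05] * n,                         # 50 ms dispatch overhead
        copy_times=[1.0] * n,                         # 1 s to copy input on a miss
        cached_flags=[i % 10 != 0 for i in range(n)], # 90% cache hits
        num_transient=64,
        arrival_rate=100.0))                          # E ~= 0.87, S ~= 55.7 here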
The astronomy workloads used to validate the model had data locality ranging from 1 up to a highest locality of 30 (i.e., each file contained 30 objects). Figure 2 (left) shows the model error for experiments that varied the number of CPUs from 2 to 128 with data locality of 1, 1.38, and 30. Note that each model error point represents a workload spanning 111K, 154K, and 23K tasks for data locality 1, 1.38, and 30, respectively. The second set of results (Figure 2, right) fixed the number of CPUs at 128 and varied the data locality from 1 to 30. These results show a larger model error, with an average of 8% and a standard deviation of 5%. We attribute the model errors to contention in the shared file system and network resources that are only captured simplistically in the current model.
We also plan to do a thorough validation of the model through discrete-event simulations, which will allow us to investigate a wider parameter space than we could in a real-world implementation. Through simulations, we also hope to measure application performance with a more dynamic set of variables that are not bound to single static values, but could be complex functions inspired by real-world systems and applications. The simulations will specifically attempt to model a Grid environment comprising computational resources, storage resources, batch schedulers, various communication technologies, various types of applications, and workload models. We will perform careful and extensive empirical performance evaluations in order to create correct and accurate input models for the simulator; the input models include 1) communication costs, 2) data management costs, 3) task scheduling costs, 4) storage access costs, and 5) workload models. The outputs from the simulations over the entire considered parameter space will form the datasets that will be used to statistically validate the model [33].
Figure 2: Model error for the large-scale astronomy application, for GPFS and data diffusion with GZ and FIT image formats: (left) varying the number of CPUs from 2 to 128 at data locality 1, 1.38, and 30; (right) varying the data locality from 1 to 30 with 128 CPUs
5. EMPIRICAL EVALUATION
We conducted several experiments to understand the performance and overhead of the data-aware scheduler, as well as to see the effects of dynamic resource provisioning and data diffusion. The experiments ran on the ANL/UC TeraGrid [18, 22] site using 64 nodes. The Falkon service ran on gto.ci.uchicago.edu (8-core Xeon at 2.33GHz per core, 2GB RAM, Java 1.6) with 2 ms latency to the executor nodes. We performed a wide range of experiments covering various scheduling policies and settings. In all experiments, the data is originally located on a GPFS [8] shared file system with sub-1ms latency. We investigated the performance of four policies: 1) first-available, 2) max-cache-hit, 3) max-compute-util, and 4) good-cache-compute. In studying the effects of dynamic resource provisioning on data diffusion, we also investigated the effects of the cache size by varying the per-node cache size from 1GB to 1.5GB, 2GB, and 4GB.
5.1 Scheduler
In order to understand the performance of the data-aware scheduler, we developed several micro-benchmarks to test scheduler performance. We used the first-available policy performing no I/O as the baseline scheduler, and tested the various scheduling policies against it. We measured the overall achieved throughput in terms of scheduling decisions per second and the breakdown of where time was spent inside the Falkon service. We conducted our experiments using 32 nodes; our workload consisted of 250K tasks, where each task accessed a random file (uniform distribution) from a dataset of 10K files of 1 byte each. We use 1-byte files to measure the scheduling time and cache hit rates with minimal impact from the actual I/O performance of persistent storage and local disk. We compare the first-available policy using no I/O (sleep 0), the first-available policy using GPFS, the max-compute-util policy, the max-cache-hit policy, and the good-cache-compute policy. The scheduling window size was set to 100X the number of nodes, or 3200. We also used 0.8 as the CPU utilization threshold in the good-cache-compute policy to determine when to switch between the max-cache-hit and max-compute-util policies. Figure 3 shows the scheduler performance under the different scheduling policies. We see the throughput in terms of scheduling decisions per second range between 2981/sec (for first-available without I/O) and as low as 1322/sec (for max-cache-hit).
Figure 3: Data-aware scheduler performance and code profiling for the various scheduling policies
It is worth pointing out that for the first-available policy, the cost of communication is significantly larger than the rest of the costs combined, including scheduling. The scheduling is quite inexpensive for this policy as it simply load balances across all workers. However, we see that with the 3 data-aware policies, the scheduling costs (red and light blue areas) are more significant.
5.2 Provisioning
The key contribution of this paper is the study of dynamic resource provisioning in the context of data diffusion, and how it performs for data-intensive workloads. In choosing our workload, we set the I/O-to-compute ratio high (10MB of I/O to 10ms of compute). The dataset consists of 10K files, with 10MB per file. Each task reads one file chosen at random from the dataset and computes for 10ms. The workload had an initial arrival rate of 1 task/sec, a multiplicative increase function with factor 1.3, 60 seconds between increase intervals, and a maximum arrival rate of 1000 tasks/sec. The increase function is A_i = min(ceiling(A_(i-1) * 1.3), 1000), 0 <= i < 24, which varies the arrival rate A from 1 to 1000 tasks/sec across 24 distinct intervals, making up 250K tasks and spanning 1415 seconds to complete. This workload is both data-intensive and has good locality of reference, making it a good candidate for measuring the impact of data diffusion and resource provisioning. Note that we needed a high I/O-to-compute ratio due to the small testbed we used (64 nodes). For example, if we were to set the ratio to a more balanced value, 1MB of I/O and 1 second of compute, having 64 dual-processor nodes would generate at most 128MB/s (1Gb/s) of demand. GPFS can sustain read rates of 4Gb/s or more, which means that on our testbed GPFS performance would have been sufficient. When we get access to a larger testbed and scale the experiments up to hundreds or thousands of nodes, we will be able to explore more balanced I/O-to-compute ratios while still requiring more throughput than shared file systems can deliver.
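For reference, a small sketch (ours, not the actual workload generator) that reproduces this arrival schedule and computes how long it takes to submit the 250K tasks:

    import math

    def arrival_schedule(start=1, factor=1.3, cap=1000, intervals=24):
        """A_i = min(ceil(A_(i-1) * 1.3), 1000) for 0 <= i < 24."""
        rates, rate = [], start
        for _ in range(intervals):
            rates.append(rate)
            rate = min(math.ceil(rate * factor), cap)
        return rates

    def time_to_submit(total_tasks=250_000, interval_len=60):
        """Seconds of the schedule needed to submit `total_tasks` tasks."""
        submitted, elapsed = 0, 0.0
        for rate in arrival_schedule():
            if submitted + rate * interval_len >= total_tasks:
                return elapsed + (total_tasks - submitted) / rate
            submitted += rate * interval_len
            elapsed += interval_len
        return elapsed

    print(arrival_schedule())   # 1, 2, 3, 4, 6, 8, 11, ..., 1000 tasks/sec
    print(time_to_submit())     # ~1415 seconds for the 250K-task workload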
Figure 4: Summary view of 250K tasks executed via the first-available policy directly on GPFS using dynamic resource provisioning
Aggregate throughput matches the ideal throughput for arrival rates ranging between 1 and 59 tasks/sec, but the throughput remains flat at an average of 4.4Gb/s for higher arrival rates. At the transition point when the arrival rate increased beyond 59 tasks/sec, the wait queue length also started growing beyond relatively small values, to an eventual length of 198K tasks. The workload execution time was 5011 seconds, which yields 28% efficiency (the ideal time being 1415 seconds).
Figures 5 through 8 (similar in format to Figure 4) summarize the results for data diffusion with varying cache sizes per node (1GB, 1.5GB, 2GB, and 4GB) using the good-cache-compute policy; recall that this policy is a combination of the max-cache-hit and max-compute-util policies, which attempts to optimize the cache hit performance as long as processor utilization is high (80% in our case). The dataset originally resided on the GPFS shared file system and was diffused to local disk caches with every cache miss (the red area in the graphs); cache hit global rates (file accesses served from remote worker caches) are shown in yellow, while cache hit local rates (file accesses satisfied from the local disk) are shown in green.
Figure 5 is an interesting use case as it shows the performance of data diffusion when the working set does not fit in cache. In our case, the working set was 100GB, but the aggregate cache size was 64GB, as we had 64 nodes at the peak of the experiment. Notice that the throughput keeps up with the ideal throughput a little longer than with the first-available policy, up to arrival rates of 101 tasks/sec. At this point, the throughput stabilizes at an average of 5.2Gb/s until 800 seconds later, when the cache hit rates increase as the working set caching reaches a steady state and the throughput rises to an average of 6.9Gb/s. The overall cache hit rate was 31%, which in the end resulted in a 57% higher throughput than the first-available policy was able to achieve using GPFS directly. Also note that the workload execution time is reduced to 3762 seconds, down from 5011 seconds for the first-available policy; the efficiency when compared to the ideal case is 38%.
Figure 5: Summary view of 250K tasks executed using data diffusion and good-cache-compute policy with 1GB caches per node and dynamic resource provisioning
Figure 6 increases the per-node cache size from 1GB to 1.5GB, which increases the aggregate cache size to 96GB, almost enough to hold the entire working set of 100GB.
Figure 6: Summary view of 250K tasks executed using data diffusion and good-cache-compute policy with 1.5GB caches per node and dynamic resource provisioning
Notice that the throughput tracks the ideal throughput further, up to 132 tasks/sec, at which point the throughput increase stops and stabilizes at an average of 6.3Gb/s. Within 350 seconds of this stabilization, the cache hit performance increased significantly, from 25% to over 90% cache hit rates; this increase in cache hit rates also raises the throughput to an average of 45.6Gb/s for the remainder of the experiment. Overall, the experiment achieved 78% local cache hit rates, 1% cache hits to remote caches, and 21% cache miss rates. The workload execution time was reduced drastically compared to the 1GB per node cache size, down to 1596 seconds, which yields an 89% efficiency when compared to the ideal case.
Figure 7 increases the cache size further to 2GB per node, for a total of 128GB, which was finally large enough to hold the entire working set of 100GB. We see that the throughput is able to hold onto the ideal throughput quite well for the entire experiment. The good performance is attributed to the ability to cache the entire working set and schedule tasks to the nodes that had the data cached, with cache hit rates approaching 98%. It is also interesting to note that the queue length never grew beyond 7K tasks, which was quite a feat given that the other experiments so far (the first-available policy, and good-cache-compute with 1GB and 1.5GB caches) all ended up with queues 91K to 200K tasks long. With an execution time of 1436 seconds, the efficiency was 99% of the ideal case.
Figure 7: Summary view of 250K tasks executed using data diffusion and good-cache-compute policy with 2GB caches per node and dynamic resource provisioning
To investigate whether it helps to increase the cache size further to 4GB per node, we conducted the experiment whose results are shown in Figure 8. We see no significant improvement in performance.
Figure 8: Summary view of 250K tasks executed using data diffusion and good-cache-compute policy with 4GB caches per node and dynamic resource provisioning
The execution time is reduced slightly to 1427 seconds (99% efficient), and the overall cache hit rates are improved to 88% local cache hits, 6% remote cache hits, and 6% cache misses.
To show the need for the good-cache-compute policy (used in the experiments of Figure 5 through Figure 8), which is a combination of the max-cache-hit and max-compute-util policies, it is instructive to examine the performance of each of these two policies on their own. We fixed the cache size per node at 4GB in order to give both policies ample opportunity for good performance. Figure 9 shows the performance of the max-cache-hit policy, which always schedules tasks according to where the data is cached, even if it has to wait for some node to become available, leaving some processors idle. Notice the new metric measured (the dotted thin black line), CPU utilization, which shows clearly poor CPU utilization that decreases with time as the scheduler has difficulty scheduling tasks to busy nodes; the average CPU utilization for the entire experiment was 43%.
Figure 9: Summary view of 250K tasks executed using data diffusion and max-cache-hit policy with 4GB caches per node and dynamic resource provisioning
It is interesting to compare this with the good-cache-compute policy, which achieved good cache hit rates (88%) at the cost of only 4.5% idle CPUs. However, it is important to point out that the max-cache-hit policy's goal of maximizing the cache hit rates was met, as it achieved 94.5% cache hits and 5.5% cache misses. The workload execution time was disappointing (but not surprising given the CPU utilization) at 2888 seconds (49% of ideal).
Our final experiment looked at the max-compute-util policy, which attempts to maximize CPU utilization at the expense of data movement. The workload execution time improves (compared to max-cache-hit) to 2037 seconds (69% efficient), but it is still far from the 1436 seconds achieved by the good-cache-compute policy. The major difference here is that there are significantly more cache hits to remote caches, as tasks were scheduled to nodes that did not have the needed data cached because the preferred nodes were busy with other work. We were able to sustain high efficiency with arrival rates up to 380 tasks/sec, with
an average throughput for the steady part of the experiment of 14.5Gb/s. It is interesting to see that the local cache hit performance in the 1800~2000 second range spiked from 60% to 98%, which results in a spike in throughput from 14Gb/s to 40Gb/s. Although we maintained 100% CPU utilization, due to the extra cost of moving data from remote executors, the performance was worse than that of the good-cache-compute policy, which left 4.5% of the CPUs idle.
Figure 10: Summary view of 250K tasks executed in a LAN using data diffusion and max-compute-util policy with 4GB caches per node and dynamic resource provisioning
5.2.3 Throughput
Figure 12 compares the throughputs (broken down into three categories: local cache, remote cache, and GPFS) of all 7 experiments presented in Figure 4 through Figure 10, and how they compare to the ideal case. The first-available policy had the lowest average throughput of 4Gb/s, compared to between 5.3Gb/s and 13.9Gb/s for data diffusion, and 14.1Gb/s for the ideal case. In addition to having much higher average throughputs, the data diffusion experiments also achieved significantly higher peak throughputs (the black bar): as high as 100Gb/s, as opposed to 6Gb/s for the first-available policy.
Figure 12: Average and peak (99 percentile) throughput for both LAN and WAN
Note also that GPFS file system load (the red portion of the bars) is significantly lower with data diffusion than for the GPFS-only experiments; in the worst case, with 1GB caches where the working set did not fit in cache, the load on GPFS is still high with 3.6Gb/s due to all the cache misses, while GPFS-only tests had 4Gb/s load. However, as the cache sizes increased and the working set fit in cache, the load on GPFS reached as low as 0.4Gb/s. Even the network load due to remote cache access was considerably low, with the highest values of 1.5Gb/s for the max-compute-util policy. All other experiments had less than 1Gb/s network load due to remote cache access.
We see a clear separation in the cache miss rates (red) for the cases where the working set fit in cache (1.5GB and greater), and the case where it did not (1GB). For the 1GB case, the cache miss rate was 70%, which is to be expected considering only 70% of the working set fit in cache at most, and cache thrashing was hampering the schedulers ability to achieve better cache miss rates. The other extreme, the 4GB cache size cases, all achieved near perfect cache miss rates of 4%~5.5%.
Figure 13 shows PI and speedup data. Notice that while both the good-cache-compute cases with 2GB and 4GB caches achieve the highest speedup of 3.5X, the 4GB case achieves a higher performance index of 1, as opposed to 0.7 for the 2GB case. This is due to the fact that fewer resources were used throughout the 4GB experiment, 17 CPU hours instead of 24 CPU hours for the 2GB case. This reduction in resource usage was due to the larger caches, which in turn allowed the system to perform better with fewer resources for longer durations; hence the wait queue did not grow as fast, which resulted in less aggressive resource allocation.
For comparison, we also ran the best performing experiment (good-cache-compute with 4GB caches) without dynamic resource provisioning; in this case we allocated 64 nodes ahead of time, outside the experiment measurement, and maintained 64 nodes throughout the experiment. While the speedup is identical to that achieved with dynamic resource provisioning, the performance index is quite low (0.33) due to the additional CPU time consumed (46 CPU hours as opposed to 17 CPU hours for the dynamic resource provisioning case). Finally, notice the performance index of the first-available policy, which uses GPFS solely; although the speedup gains of data diffusion over the first-available policy are relatively modest (1.3X to 3.5X), the performance index of data diffusion is much higher, from at least 2X to as high as 34X.
Figure 13: PI and speedup data for both LAN and WAN
5.2.5 Slowdown
Speedup compares data diffusion to the base case of the LAN GPFS, but does not tell us how well data diffusion performed in relation to the ideal case. Recall that the ideal case is computed from the arrival rate of tasks, assuming zero communication costs and infinite resources to handle tasks in parallel; in our case, the ideal workload execution time is 1415 seconds. Figure 14 shows the slowdown for the LAN experiments as a function of arrival rate. Slowdown (SL) measures the factor by which the workload execution time is slower than the ideal workload execution time: SL = WETpolicy / WETideal, where WETideal is 1415 seconds.
The results in Figure 14 clearly show the arrival rates that could be handled by each approach, with the first-available policy (the GPFS-only case) saturating earliest, at 59 tasks/sec (denoted by the rising red line). It is evident that larger cache sizes allowed the saturation rates to be higher (essentially ideal for some cases, such as good-cache-compute with 4GB caches). It is interesting to point out that the good-cache-compute policy with 1.5GB caches shows its slowdown increase relatively early (similar to the 1GB case), but towards the end of the experiment the slowdown is reduced from almost 5X back down to an almost ideal 1X. This sudden improvement in performance is attributed to a critical part of the working set becoming cached and the cache hit rates increasing significantly. Also note the odd slowdown (as high as 2X) of the 4GB cache DRP case at arrival rates 11, 15, and 20; this slowdown matches the drop in throughput between time 360 and 480 seconds in Figure 10 (the detailed summary view of this experiment), which in turn occurred when an additional resource was allocated. It is important to note that resource allocation takes on the order of 30~60 seconds due to LRM overheads, which is why it took the slowdown 120 seconds to return to normal (1X), as the dynamic resource provisioning compensated for the drop in performance.
Figure 14: Slowdown for the LAN experiment as we varied arrival rate
Average response time (ART) is the end-to-end time from task submission to task completion notification; ART = WQT + ET + DT, where WQT is the wait queue time, ET is the task execution time, and DT is the time to deliver the result. Figure 15 shows the response time results across all 14 experiments in log scale. We see a significant difference between the best data diffusion response time (3.1 seconds per task), the worst data diffusion response time (1084 seconds), and the worst GPFS response time (1870 seconds).
Figure 15: Average response time per task across all experiments (log scale)
That is over a 500X difference between the response time of the good-cache-compute data diffusion policy and that of the first-available (GPFS only) policy. One of the main factors that influences the average response time is the time tasks spend in the Falkon wait queue. In the worst case (first-available), the queue length grew to over 200K tasks as the allocated resources could not keep up with the arrival rate. In contrast, the best case (good-cache-compute with 4GB caches) only queued up 7K tasks at its peak. The ability of data diffusion to keep the wait queue short allowed it to achieve an average response time of only 3.1 seconds.
6. CONCLUSIONS
Dynamic analysis of large datasets is becoming increasingly important in many domains. When building systems to perform such analyses, we face difficult tradeoffs. Do we dedicate computing and storage resources to analysis tasks, enabling rapid data access but wasting resources when analysis is not being performed? Or do we move data to compute resources, incurring potentially expensive data transfer costs? We describe here a data diffusion approach to this problem that seeks to combine elements of both dedicated and on-demand approaches. The key idea is that we respond to demands for data analysis by allocating data and compute systems and migrating code and data to those systems. We then retain these dynamically allocated resources (and cached code and data) for some time, so that if workloads feature data
locality, they will obtain the performance benefits of dedicated resources. To explore this approach, we have extended the Falkon dynamic resource provisioning and task dispatch system to cache data at executors and to incorporate data-aware scheduling policies at the dispatcher. In this way, we leverage the performance advantages of high-speed local disk and reduce access to persistent storage. This paper has two contributions: 1) defining an abstract model for data diffusion and validating it against results from a real astronomy application; and 2) exploring the process of expanding a set of resources based on demand, and the impact this has on application performance. Our results show data diffusion offering dramatic improvements in the performance achieved per resource used (34X), and that it reduces application response time by as much as 506X when compared with running data-intensive benchmarks directly against a shared file system such as GPFS.
In future work, we plan to explore more sophisticated algorithms that address, for example, what happens when an executor is released: should we discard its cached data, move it to another executor, or move it to persistent storage? Do cache eviction policies affect cache hit ratio performance? Answers to these and other related questions will presumably depend on workload and system characteristics. We plan to use the Swift parallel programming system to explore data diffusion performance with more applications and workloads. We have integrated Falkon into the Karajan workflow engine used by Swift [14, 28]; thus, Karajan and Swift applications can use Falkon without modification. Swift has been applied to applications in the physical sciences, biological sciences, social sciences, humanities, computer science, and science education. We have already run large-scale applications (fMRI, Montage, MolDyn, DOCK, MARS) without data diffusion [4, 14, 28, 32], which we plan to pursue as use cases for data diffusion.
7. ACKNOWLEDGEMENTS
This work was supported in part by the NASA Ames Research Center GSRP Grant Number NNA06CB89H and by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, Office of Science, U.S. Dept. of Energy, under Contract DE-AC02-06CH11357. We also thank TeraGrid and the Computation Institute at University of Chicago for hosting the experiments reported in this paper.
8. REFERENCES
[1] W. Xiaohui, et al. Implementing data aware scheduling in Gfarm using LSF scheduler plugin
mechanism, 2005 International Conference on Grid Computing and Applications, pp.3-10, 2005 [2] P. Fuhrmann. dCache, the commodity cache, IEEE Mass Storage Systems and Technologies 2004 [3] S. Ghemawat, H. Gobioff, S.T. Leung. The Google file system, ACM SOSP 2003, pp. 29-43 [4] I. Raicu, Y. Zhao, C. Dumitrescu, I. Foster, M. Wilde. Falkon: a Fast and Light-weight tasK executiON framework, IEEE/ACM SC 2007 [5] I. Raicu, I. Foster, A. Szalay. Harnessing Grid Resources to Enable the Dynamic Analysis of Large Astronomy Datasets, IEEE/ACM SC 2006 [6] I. Raicu, I. Foster, A. Szalay, G. Turcu. AstroPortal: A Science Gateway for Large-scale Astronomy Data Analysis, TeraGrid Conf. 2006 [7] A. Szalay, J. Bunn, J. Gray, I. Foster, I. Raicu. The Importance of Data Locality in Distributed Computing Applications, NSF Workflow Workshop 2006 [8] F. Schmuck, R. Haskin, GPFS: A Shared-Disk File System for Large Computing Clusters, FAST 2002 [9] K. Ranganathan, I. Foster, Simulation Studies of Computation and Data Scheduling Algorithms for Data Grids, Journal of Grid Computing, 2003 [10] J. Dean, S. Ghemawat. MapReduce: Simplified Data Processing on Large Clusters, OSDI 2004 [11] I. Raicu, C. Dumitrescu, I. Foster. Dynamic Resource Provisioning in Grid Environments, TeraGrid Conference 2007 [12] G. Banga, et al. Resource Containers: A New Facility for Resource Management in Server Systems. USENIX OSDI 1999 [13] J.A. Stankovic, et al. The Spring System: Integrated Support for Complex Real-Time Systems, Real-Time Systems, 1999 [14] Y. Zhao, M. Hategan, B. Clifford, I. Foster, G. von Laszewski, I. Raicu, T. Stef-Praun, M. Wilde. Swift: Fast, Reliable, Loosely Coupled Parallel Computation, IEEE Workshop on Scientific Workflows 2007 [15] R. Hasan, et al. A Survey of Peer-to-Peer Storage Techniques for Distributed File Systems, ITCC 2005 [16] S. Podlipnig, L. Bszrmenyi. A survey of Web cache replacement strategies, ACM Computing Surveys, 2003 [17] R. Lancellotti, et al. A Scalable Architecture for Cooperative Web Caching, Workshop in Web Engineering, Networking 2002
[18] C. Catlett, et al. TeraGrid: Analysis of Organization, System Architecture, and Middleware Enabling New Types of Applications, HPC 2006 [19] F. Chang, et al. Bigtable: A Distributed Storage System for Structured Data, USENIX OSDI 2006 [20] I. Raicu, I. Foster. Characterizing Storage Resources Performance in Accessing the SDSS Dataset, Tech. Report, Univ of Chicago, 2006 [21] X. Wei, W.W. Li, O. Tatebe, G. Xu, L. Hu, and J. Ju. Integrating Local Job Scheduler LSF with Gfarm, Parallel and Distributed Processing and Applications, Springer Berlin, Vol. 3758/2005, pp 196-204, 2005 [22] ANL/UC TeraGrid Site Details, http://www.uc.teragrid.org/tg-docs/tg-techsum.html, 2007 [23] I. Raicu, Y. Zhao, I. Foster, A. Szalay. A Data Diffusion Approach to Large Scale Scientific Exploration, Microsoft eScience Workshop 2007 [24] A. Bialecki, M. Cafarella, D. Cutting, O. OMalley. Hadoop: a framework for running applications on large clusters built of commodity hardware, http://lucene.apache.org/hadoop/, 2005 [25] T. Kosar. A New Paradigm in Data Intensive Computing: Stork and the Data-Aware Schedulers, IEEE CLADE 2006 [26] J. Bent, D. Thain, et al. Explicit control in a batch-aware distributed file system. USENIX/ACM NSDI 2004 [27] I. Raicu. Harnessing Grid Resources with DataCentric Task Farms, Technical Report, University of Chicago, 2007 [28] Y. Zhao, I. Raicu, I. Foster, M. Hategan, V. Nefedova, M. Wilde. Realizing Fast, Scalable and Reliable Scientific Computations in Grid Environments, Grid Computing Research Progress, Nova Pub. 2008 [29] J. Gray. Distributed Computing Economics, Technical Report MSR-TR-2003-24, Microsoft Research, Microsoft Corporation, 2003 [30] W. Allcock, J. Bresnahan, R. Kettimuthu, M. Link, C. Dumitrescu, I. Raicu, I. Foster. The Globus Striped GridFTP Framework and Server, ACM/IEEE SC05, 2005 [31] I. Raicu, Y. Zhao, I. Foster, A. Szalay. "Accelerating Large-scale Data Exploration through Data Diffusion", ACM/IEEE Workshop on Data-Aware Distributed Computing 2008. [32] I. Raicu, Z. Zhang, M. Wilde, I. Foster. Towards Loosely-Coupled Programming on Petascale Systems, under review at SC 2008
[33] NIST/SEMATECH e-Handbook of Statistical Methods, http://www.itl.nist.gov/div898/handbook/, 2007 [34] M. Feller, I. Foster, and S. Martin. GT4 GRAM: A Functionality and Performance Study, TeraGrid Conference 2007 [35] J. Frey, T. Tannenbaum, I. Foster, M. Frey, S. Tuecke, Condor-G: A Computation Management Agent for Multi-Institutional Grids, Cluster Computing, 2002. [36] E. Walker, J.P. Gardner, V. Litvin, E.L. Turner, Creating Personal Adaptive Clusters for Managing Scientific Tasks in a Distributed Computing Environment, Workshop on Challenges of Large Applications in Distributed Environments, 2006. [37] G. Singh, C. Kesselman E. Deelman. Performance Impact of Resource Provisioning on Workflows, USC ISI Technical Report 2006. [38] G. Mehta, C. Kesselman, E. Deelman. Dynamic Deployment of VO-specific Schedulers on
Managed Resources, USC ISI Technical Report, 2006. [39] K. Appleby, S. Fakhouri, L. Fong, G. Goldszmidt, M. Kalantar, S. Krishnakumar, D. Pazel, J. Pershing, and B. Rochwerger, Oceano - SLA Based Management of a Computing Utility, 7th IFIP/IEEE International Symposium on Integrated Network Management, 2001. [40] L. Ramakrishnan, L. Grit, A. Iamnitchi, D. Irwin, A. Yumerefendi, J. Chase. Toward a Doctrine of Containment: Grid Hosting with Adaptive Resource Control, IEEE/ACM International Conference for High Performance Computing, Networking, Storage, and Analysis (SC06), 2006. [41] J. Bresnahan. An Architecture for Dynamic Allocation of Compute Cluster Bandwidth, MS Thesis, Department of Computer Science, University of Chicago, December 2006.