Coflow: A Networking Abstraction for Cluster Applications
HotNets'2012, UC Berkeley (same guys behind "Resilient Distributed Datasets")
Key Ideas/Takeaway:
1. The completion time of a cluster application depends more on the fate of a collection of flows than on any individual flow.
2. Most application needs can be expressed as either minimizing completion time or meeting deadlines. (really?)
3. Flows can be decoupled in time (by using storage) and in space (by using broadcast or multicast). --- Note: this is an example of the network using storage!
4. Typical cluster-application dataflow patterns: MapReduce; dataflow with barriers (multi-stage MapReduce, e.g., Pig); dataflow without explicit barriers (Dryad); dataflow with cycles (Spark); bulk synchronous parallel (parallel scientific computing); partition-aggregate (Google's search engine).
API:
Four players: driver (cluster coordinator, or cloud controller), sender, receiver, network. A small sketch of how the calls compose follows the list.
create(pattern, [options]) => coflow handle; called by the driver. Pattern may be shuffle, broadcast, aggregation, etc.
update(handle, [options]) => result; called by the driver
put(handle, flow id, content, [options]) => result; called by a sender
get(handle, flow id, [options]) => content; called by a receiver
terminate(handle, [options]) => result; called by the driver
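To make the five calls concrete, here is a minimal Python sketch of how they might compose. The Coflow class, the in-memory stand-in for the network, and the option names (deadline_ms, priority) are my own guesses, not from the paper.

```python
# A minimal sketch of the five coflow calls; the Coflow class, the in-memory
# stand-in for the network, and the option names are hypothetical.
class Coflow:
    def __init__(self, pattern, options):
        self.pattern = pattern      # e.g., "shuffle", "broadcast", "aggregation"
        self.options = options      # e.g., a deadline or priority hint
        self.flows = {}             # flow_id -> content (stands in for the network)

def create(pattern, **options):
    # Called by the driver (cluster coordinator / cloud controller).
    return Coflow(pattern, options)

def update(handle, **options):
    # Called by the driver, e.g., to tighten a deadline mid-coflow.
    handle.options.update(options)
    return True

def put(handle, flow_id, content, **options):
    # Called by a sender. Decoupling in time: content may sit in storage
    # until the receiver asks for it.
    handle.flows[flow_id] = content
    return True

def get(handle, flow_id, **options):
    # Called by a receiver; a real implementation would block until
    # the content arrives.
    return handle.flows.get(flow_id)

def terminate(handle, **options):
    # Called by the driver once the collective transfer is done.
    handle.flows.clear()
    return True

# Driver:
cf = create("shuffle", deadline_ms=500)
# Sender (one flow per mapper->reducer pair):
put(cf, ("map1", "reduce2"), b"partition bytes")
# Receiver:
data = get(cf, ("map1", "reduce2"))
update(cf, priority=1)
terminate(cf)
```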
Underlying Assumption:
Coflow assumes a fixed set of senders and receivers; the driver has to determine them without network participation. That is, they exclude the possibility that the network decides where to place replicas, etc. This might be mitigated by using candidate senders/receivers, but I am not quite sure.
"co-flow comes into action once you have already determined where your end-points are located. If the decision of end-point placement is not good one, there is only a limited opportunity"
Questions:
1. How does this work in a virtualized environment?
2. How does the cloud controller coordinate multiple coflows? They propose sharing (reservation-based), prioritization, and ordering. This gives the cloud controller a way to allocate the (abstracted) network resources. What are the implications?
3. How does the network coordinate requests from multiple cloud controllers? Not explicit in the paper.
4. What network topology is presented to the cloud controller? The real topology? Rings?
5. How does this framework handle the situation where the network is not the bottleneck? Does it have to dynamically interact with the computation and storage units?
Programming Your Network at Run-time for Big Data Applications
HotSDN'2012, IBM Watson and Rice University
Key ideas:
1. The application manager sends a traffic demand matrix to the SDN controller, which in turn uses this information to optimize the network (e.g., using optical switches to set up the topology). The traffic demand matrix is estimated using application-level knowledge (sketched after this list).
2. The application manager, knowing that it is operating on an optical-switch-enabled network, can do some simple optimizations, e.g., aggregate reducers in the same rack, or submit requests to the SDN controller in batches. Here the application manager uses two pieces of network information: that the network is optically switched, and which nodes are in the same rack.
3. Using the traffic matrix (?), they argue for efficient optical-switch implementations of some particular communication patterns, e.g., aggregation, shuffling, or overlapped aggregation.
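A small sketch of what "estimating the traffic demand matrix from application-level knowledge" could look like at rack granularity. The job description, function names, and sizes are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical job description: bytes each mapper will send to each reducer,
# plus a task-to-rack placement. All names and sizes are invented.
shuffle_bytes = {("m1", "r1"): 4e9, ("m1", "r2"): 1e9,
                 ("m2", "r1"): 3e9, ("m2", "r2"): 2e9}
rack_of = {"m1": "rackA", "m2": "rackB", "r1": "rackC", "r2": "rackC"}

def demand_matrix(shuffle_bytes, rack_of):
    """Aggregate task-level demand into a rack-to-rack traffic matrix,
    since only the ToR switches have optical ports in this design."""
    matrix = defaultdict(float)
    for (src, dst), nbytes in shuffle_bytes.items():
        s, d = rack_of[src], rack_of[dst]
        if s != d:                # intra-rack traffic never leaves the ToR
            matrix[(s, d)] += nbytes
    return dict(matrix)

# The application manager would hand this matrix to the SDN controller,
# which could then set up optical circuits for the heaviest rack pairs.
# Note how placing r1 and r2 in the same rack concentrates the demand.
print(demand_matrix(shuffle_bytes, rack_of))
# -> {('rackA', 'rackC'): 5000000000.0, ('rackB', 'rackC'): 5000000000.0}
```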
Comments:
1. This work operates at rack granularity, because only the ToR switches have optical links.
2. Another paper that mixes API and implementation... =.=
3. I am really sick of Hadoop...-__-b!!!
Questions:
1. What is the network topology presented? A ring per rack?
Fabric: A Retrospective on Evolving SDN
HotSDN'2012, Nicira and UC Berkeley (Scott Shenker)
This paper advocates a router/switch-chassis-style implementation of the network.
Before the paper:
1. I love Scott Shenker!!!! I almost agree with every word he says about SDN!
2. We definitely need some layer-2.5 addressing, since both IP and MAC have fundamental deficiencies.
What's the API of the network:
1. Host -- Network interface
Hosts ask the network to deliver their packets, along with QoS requirements.
Currently this is done via packet headers.
2. Operator -- Network interface
Operators give requirements for, and decisions about, the network's operation to the network.
Currently this is done by box-by-box router configuration. SDN provides a more programmable interface for this, by decoupling the distribution model of the control plane from the topology of the data plane.
3. Packet -- Switch interface
How a packet identifies itself to a switch. The switch then uses this piece of information to do forwarding, thus actually implementing the connectivity of the network.
Currently this is done via packet headers.
The problem is that we currently don't distinguish the Host -- Network interface from the Packet -- Switch interface, which unnecessarily couples the implementation of network services (isolation, security, etc.) with the implementation of core connectivity.
The fabric architecture:
1. hosts, which ask for network services
2. edge switches, which implement network services, using current headers and protocols (e.g., IPv4)
3. the core fabric, which implements network connectivity, potentially using its own labels (like MPLS)
This is very much like the internal architecture of a modern switch.
(Two versions of) SDN should be introduced separately: to the edge for service management (complex), and to the core for connectivity management (very basic). A toy sketch of the edge/core split follows.
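Here is a toy Python sketch of that split under a label scheme I made up: edge switches inspect full IP headers and enforce policy, while the core forwards purely on an opaque, MPLS-like label.

```python
# A toy sketch of the fabric split; the label scheme, tables, and names
# are all invented for illustration, not from the paper.

EDGE_POLICY = {("10.0.0.0/24", "10.0.1.0/24"): "allow"}  # services live at the edge
LABEL_FOR_DST_EDGE = {"10.0.1.0/24": 7}                  # label ~ path to egress edge
CORE_LABEL_TABLE = {7: "port3"}                          # the core knows only labels

def ingress_edge(pkt):
    """Edge switch: enforce isolation/security on the full IP header,
    then encapsulate the packet with an opaque fabric label."""
    if EDGE_POLICY.get((pkt["src_net"], pkt["dst_net"])) != "allow":
        return None                                      # dropped by edge policy
    return {"label": LABEL_FOR_DST_EDGE[pkt["dst_net"]], "payload": pkt}

def core_switch(frame):
    """Core fabric: pure connectivity; forwards on the label alone,
    with no knowledge of IP addresses or services."""
    return CORE_LABEL_TABLE[frame["label"]], frame

def egress_edge(frame):
    """Edge switch at the far side: pop the label, recover the packet."""
    return frame["payload"]

pkt = {"src_net": "10.0.0.0/24", "dst_net": "10.0.1.0/24", "data": b"hi"}
frame = ingress_edge(pkt)
out_port, frame = core_switch(frame)
assert egress_edge(frame) == pkt and out_port == "port3"
```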
Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center
NSDI'2011, UC Berkeley (Scott Shenker and Ion Stoica)
Key Idea: Resource Offers
Instead of doing the scheduling itself, Mesos makes resource offers and pushes scheduling decisions down to the framework applications. E.g., Mesos offers two nodes with 8 GB of RAM each, and Hadoop decides whether to take the offer and which task to launch on it.
A more traditional approach would be for applications to express their needs in a (specially designed) language and have a central scheduler schedule based on those needs. But what if an application has needs that can't be expressed in such a language? Also, Hadoop already has scheduling logic, so why not utilize it? (A toy version of the offer/accept loop is sketched below.)
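A toy version of the offer/accept loop. This framework interface is a simplification for illustration, not the real Mesos API; the point is just that the master only makes offers, while accepting and picking a task is framework logic.

```python
# A toy version of the resource-offer loop; this interface is a
# simplification for illustration, not the real Mesos API.
class HadoopScheduler:
    def __init__(self, pending_tasks):
        self.pending = pending_tasks  # e.g., [{"cpus": 4, "mem_gb": 8}]

    def resource_offered(self, offer):
        """Framework-side logic: accept only if some pending task fits,
        and pick which task to launch on the offered resources."""
        for task in self.pending:
            if task["cpus"] <= offer["cpus"] and task["mem_gb"] <= offer["mem_gb"]:
                self.pending.remove(task)
                return task           # accept: launch this task on the offer
        return None                   # decline: Mesos will offer elsewhere

class MesosMaster:
    def __init__(self, frameworks):
        self.frameworks = frameworks

    def offer(self, node, cpus, mem_gb):
        """Master-side logic: make offers and push the scheduling
        decision down to the frameworks."""
        for fw in self.frameworks:
            task = fw.resource_offered({"node": node, "cpus": cpus, "mem_gb": mem_gb})
            if task is not None:
                print(f"launching {task} on {node}")
                return
        print(f"offer for {node} declined by all frameworks")

hadoop = HadoopScheduler([{"cpus": 4, "mem_gb": 8}])
MesosMaster([hadoop]).offer("node1", cpus=8, mem_gb=8)
```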
The Datacenter Needs an Operating System
HotOS'2011, UC Berkeley, Scott Shenker and Ion Stoica
Think of the datacenter as the new computer, and approach the datacenter infrastructure problem from an OS perspective.
A datacenter OS needs to provide:
1. Resource sharing
Hadoop already does scheduling between jobs.
Unsolved: inter-framework sharing, sharing the network, independent services, and virtualization
2. Data sharing
Currently done in the form of a distributed file system
Unsolved: standardized interfaces (like VFS?), performance isolation, etc.
3. Program abstractions
including communication primitives
4. Debugging and Monitoring
Questions:
If we think of Hadoop as a form of data center OS, where does it fall short?
Location, Location, Location! Modeling Data Proximity in the Cloud
HotNets'2010, MSR and U Mich
Key Idea:
Insert a layer (which they call Contour) between the application and the key-value store; it reports to the application the latency of accessing a particular key.
To calculate this, the key-value store reports to Contour a replication topology for each key; Contour combines this information with network latencies, etc., to calculate the update latency. (A toy version of the calculation is sketched below.)
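A toy version of how Contour might combine the per-key replication topology with measured network latencies. The replication-topology format, the chain-replication assumption, and all numbers are mine, not the paper's.

```python
# A toy Contour-style latency estimate; the replication-topology format,
# the chain-replication assumption, and all numbers are made up.
NET_LATENCY_MS = {("client", "A"): 2, ("A", "B"): 10, ("B", "C"): 40,
                  ("client", "B"): 12, ("client", "C"): 45}

# Replication topology for one key, as reported by the key-value store:
# writes land on A, which forwards to B, which forwards to C.
REPLICA_CHAIN = ["A", "B", "C"]

def update_latency(chain):
    """Time until an update reaches the last replica in the chain."""
    total = NET_LATENCY_MS[("client", chain[0])]
    for src, dst in zip(chain, chain[1:]):
        total += NET_LATENCY_MS[(src, dst)]
    return total

def read_latency(chain):
    """Latency to the nearest replica that holds the key."""
    return min(NET_LATENCY_MS[("client", r)] for r in chain)

print(update_latency(REPLICA_CHAIN))  # 2 + 10 + 40 = 52 ms
print(read_latency(REPLICA_CHAIN))    # 2 ms (replica A is closest)
```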
This suffers from a security problem, as it reveals too many details about the storage layer to the application.