Where is the bottleneck (app? bandwidth? middlebox?)
Fast bottleneck identification (sketch below):
Processing time (hard to measure for packets in/out)
CPU/Mem info (hard to decide how often to sample)
Open connections
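A minimal sketch of how these three signals might be combined into a bottleneck guess; the thresholds, metric names, and the link_util input are assumptions for illustration, not something stated in the talk.

    # Hypothetical classifier: maps sampled signals to a rough bottleneck guess.
    def classify_bottleneck(proc_time_ms, cpu_util, mem_util, open_conns,
                            link_util, conn_limit=10000):
        """Return a rough guess: 'app', 'bandwidth', 'middlebox', or 'unknown'."""
        if link_util > 0.9:                   # link nearly saturated -> bandwidth
            return "bandwidth"
        if cpu_util > 0.9 or mem_util > 0.9:  # host resources exhausted -> app
            return "app"
        if proc_time_ms > 50 or open_conns > conn_limit:
            return "middlebox"                # slow per-packet processing or connection-table pressure
        return "unknown"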
Strato: capture MB and NW bottlenecks
Use greedy heuristic (tentatively add a middlebox) --- doesn't work in complex MB topologies
Refinement: try the most commonly used (overlapping) MB first? (sketch below)
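A hypothetical sketch of the greedy idea with the overlap-based refinement: middleboxes shared by the most slow paths are tried first, and one is kept only if tentatively adding it improves a blame score. Function and variable names are illustrative, not Strato's actual code.

    from collections import Counter

    def greedy_bottleneck_mbs(slow_paths, blame, max_mbs=3):
        """slow_paths: list of paths, each a list of middlebox ids.
        blame(candidate_set): assumed score of how well the set explains the slowdowns."""
        # Refinement: order candidates by how many slow paths traverse them.
        counts = Counter(mb for path in slow_paths for mb in path)
        candidates = [mb for mb, _ in counts.most_common()]

        chosen, best = [], blame([])
        for mb in candidates:
            score = blame(chosen + [mb])
            if score > best:          # keep the middlebox only if it helps explain the slowdown
                chosen.append(mb)
                best = score
            if len(chosen) == max_mbs:
                break
        return chosen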
Middlebox scaling (Aaron):
Move some of the middlebox control to the controller.
1. How is the logic divided?
Classify middlebox state and define interfaces between middleboxes and the controller
Action state + support state + tuning state
Represent states: key (field1 = value1, field2 = value2, ...) + action (drop, forward, etc.)
Interfaces: get, add, remove states (sketch below)
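A minimal sketch of the state representation and interface described above: a key of (field, value) pairs plus an action, tagged as action/support/tuning state, with get/add/remove calls the controller could invoke. Class and method names are assumptions, not the paper's API.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class StateKey:
        fields: tuple    # e.g. (("src_ip", "10.0.0.1"), ("dst_port", 80))

    @dataclass
    class StateEntry:
        key: StateKey
        action: str          # "drop", "forward", ...
        kind: str = "action" # "action" | "support" | "tuning"

    class MiddleboxStateAPI:
        """Interface a middlebox could expose to the controller."""
        def __init__(self):
            self._table = {}

        def add(self, entry: StateEntry):
            self._table[entry.key] = entry

        def get(self, key: StateKey):
            return self._table.get(key)

        def remove(self, key: StateKey):
            self._table.pop(key, None)

    # Usage: add a drop rule for one flow, then look it up.
    mb = MiddleboxStateAPI()
    k = StateKey((("src_ip", "10.0.0.1"), ("dst_port", 80)))
    mb.add(StateEntry(k, "drop"))
    assert mb.get(k).action == "drop"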
NaPs: network-aware placement and scheduling in clusters (Yizheng):
Motivation: little work examines the interplay between CPU/memory resource sharing and network resource sharing (via TCP congestion control, etc.)
Quincy placement: put instances as close as possible (not optimal)
NaPs design:
General framework to enable network awareness in the cluster:
Lowest level: SDN controller to expose network state
Higher level: cluster scheduler talks to the SDN controller for network status, to worker nodes for workload info, and to storage for data placement information (placement sketch below)
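A hypothetical sketch of one placement decision in this two-level design: the scheduler queries an SDN controller for path utilization, worker nodes for load, and the storage layer for where the input data lives, then picks the node with the lowest combined cost. The sdn/workers/storage objects and the equal weighting are assumptions, not NaPs' real interface.

    def place_task(task, nodes, sdn, workers, storage):
        """Pick the node minimizing a weighted sum of network and compute cost."""
        data_nodes = storage.locations(task)                  # nodes holding the task's input data

        def cost(node):
            net = sdn.path_utilization(data_nodes, node)      # utilization on paths from data to node
            cpu = workers.load(node)                          # current CPU/memory load on node
            return 0.5 * net + 0.5 * cpu                      # simple equal weighting for illustration

        return min(nodes, key=cost)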