Thursday, February 16, 2012

FAST'12 Session 5: OS techniques

FIOS: A Fair, Efficient Flash I/O Scheduler
Stan Park and Kai Shen, University of Rochester

My takeaway: the I/O scheduler should be adapted to flash SSD characteristics; the details of how are in the paper.

High performance is easy on SSDs; fairness is the bigger concern.
Existing Linux schedulers either lack flash awareness and anticipation support, or anticipate too aggressively; none serve SSDs well.
Observation: reads are fast with little variation; writes are slow and vary widely.
Policy: timeslice management that accounts for read/write asymmetry, anticipation, and parallelism (toy sketch after the evaluation note below).
Prefers reads over writes; uses a linear cost model.
Anticipation is used not for performance but for fairness.
Evaluation: I/O slowdown (latency?) is considerably lower, and fairness holds.
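To make the timeslice-plus-linear-cost idea concrete, here is a tiny sketch of how I picture it, not FIOS's actual code: each task gets an equal time budget per epoch, every request is charged a linear cost against that budget, and writes are assumed to cost several times more than reads. All names and constants below are made up.

/* Toy sketch of per-task timeslices with a linear cost model (my own
 * interpretation, not FIOS code): reads are charged a small flat cost,
 * writes a larger size-dependent cost, against each task's epoch budget. */
#include <stdio.h>

#define EPOCH_US          100000  /* per-task budget per epoch (made up)   */
#define READ_COST_US      100     /* assumed flat read cost                */
#define WRITE_BASE_US     400     /* assumed write cost: base + per-page   */
#define WRITE_PER_PAGE_US 50

struct task { const char *name; long budget_us; };

/* linear cost model: cost(req) = base + slope * size */
static long req_cost(int is_write, int pages)
{
    return is_write ? WRITE_BASE_US + (long)WRITE_PER_PAGE_US * pages
                    : READ_COST_US;
}

/* Charge a request to its issuer's timeslice; a real scheduler would also
 * dispatch pending reads ahead of pending writes. */
static void dispatch(struct task *t, int is_write, int pages)
{
    long c = req_cost(is_write, pages);
    if (t->budget_us < c) {
        printf("%s: out of budget, waits for the next epoch\n", t->name);
        return;
    }
    t->budget_us -= c;
    printf("%s: %s %d page(s), charged %ld us, %ld us left\n",
           t->name, is_write ? "write" : "read", pages, c, t->budget_us);
}

int main(void)
{
    struct task reader = { "reader", EPOCH_US };
    struct task writer = { "writer", EPOCH_US };
    dispatch(&reader, 0, 1);   /* reads are cheap...                 */
    dispatch(&writer, 1, 64);  /* ...writes drain the budget quickly */
    dispatch(&reader, 0, 1);
    return 0;
}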

Q&A:
Q: What's your definition of fairness (equal latency or equal throughput)?
A: Latency.
Q: Then that's not really fair...
Q: Is this scheduler limited to files?
A: No, it's more general.
Q: How does fairness interact with different priority classes?
A: Giving more timeslice to higher-priority processes would work.









Shredder: GPU-Accelerated Incremental Storage and Computation
Pramod Bhatotia and Rodrigo Rodrigues, Max Planck Institute for Software Systems (MPI-SWS); Akshat Verma, IBM Research—India

My takeaway: dedup chunking can be offloaded to the GPU, but it needs a clever design.

Motivation: dedup is used to store big data and to support incremental storage/computation, so processing data inside the storage system becomes the bottleneck.
Use the GPU to accelerate it.
However, a straightforward GPU port still can't match the storage bandwidth (GPUs are designed for compute-intensive rather than data-intensive tasks), so data-intensive tasks need a new design.
The basic design (transfer data to the GPU for chunking, then transfer results back) doesn't scale:
1. Host-device communication is a bottleneck.
Solution: asynchronous execution (start GPU computation before all the data has been transferred), using pinned circular ring buffers because async transfers require pinned host memory (sketch after this list).
2. Device memory conflicts (multiple GPU threads contend for the same memory bank)...
Solution: memory coalescing? I didn't fully follow this part.
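As a sketch of how I imagine the pipelining fix (my own toy CUDA, not Shredder's code): a small ring of pinned host buffers feeds the device through cudaMemcpyAsync on per-slot streams, so copying one chunk overlaps with hashing another; the toy kernel walks memory in a grid-stride pattern so neighbouring threads read neighbouring addresses (coalesced). Buffer count, sizes, and the hash are arbitrary placeholders.

/* Toy CUDA sketch (mine, not Shredder's): pinned ring buffers + streams
 * so host->device copies overlap with on-device chunk hashing. */
#include <cuda_runtime.h>
#include <stdio.h>

#define NBUF  4                  /* ring of pinned staging buffers   */
#define BUFSZ (4 << 20)          /* 4 MB per buffer (arbitrary)      */
#define NTHREADS (256 * 64)      /* matches the launch config below  */

/* Toy kernel: a grid-stride loop, so adjacent threads read adjacent
 * addresses and global-memory accesses coalesce instead of conflicting. */
__global__ void hash_chunk(const unsigned char *data, size_t n, unsigned int *out)
{
    unsigned int h = 0;
    for (size_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += (size_t)gridDim.x * blockDim.x)
        h = h * 31 + data[i];    /* stand-in for real fingerprinting */
    out[blockIdx.x * blockDim.x + threadIdx.x] = h;
}

int main(void)
{
    unsigned char *host[NBUF], *dev[NBUF];
    unsigned int  *out[NBUF];
    cudaStream_t   st[NBUF];

    for (int i = 0; i < NBUF; i++) {
        cudaHostAlloc(&host[i], BUFSZ, cudaHostAllocDefault); /* pinned => async DMA */
        cudaMalloc(&dev[i], BUFSZ);
        cudaMalloc(&out[i], NTHREADS * sizeof(unsigned int));
        cudaStreamCreate(&st[i]);
    }

    for (int iter = 0; iter < 16; iter++) {    /* pretend the input has 16 chunks */
        int b = iter % NBUF;
        cudaStreamSynchronize(st[b]);          /* wait until this ring slot is free */
        /* ... refill host[b] with the next chunk of the input stream here ... */
        cudaMemcpyAsync(dev[b], host[b], BUFSZ, cudaMemcpyHostToDevice, st[b]);
        hash_chunk<<<256, 64, 0, st[b]>>>(dev[b], BUFSZ, out[b]);
    }
    cudaDeviceSynchronize();

    for (int i = 0; i < NBUF; i++) {
        cudaFreeHost(host[i]); cudaFree(dev[i]); cudaFree(out[i]);
        cudaStreamDestroy(st[i]);
    }
    return 0;
}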
Evaluation: 5x speedup compared to a multi-core CPU, and it matches the I/O bandwidth.
Case study: incremental MapReduce with content-based chunking.
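To make "content-based chunking" concrete, here is a toy chunker of my own (not Shredder's): a chunk boundary is declared wherever a running hash over the data hits a magic value, so an insertion only shifts boundaries near the edit instead of re-chunking the whole input. The hash and constants are placeholders for a real Rabin fingerprint.

/* Toy content-defined chunker (my own sketch, not Shredder's code):
 * boundaries are placed where a running hash matches a mask, so they
 * depend on content rather than fixed offsets. */
#include <stdio.h>
#include <string.h>

#define MIN_CHUNK 8            /* tiny values just for the demo          */
#define MASK      0x3F         /* expected chunk length ~ MASK+1 bytes   */

static void chunk(const unsigned char *data, size_t n)
{
    unsigned int h = 0;
    size_t start = 0;
    for (size_t i = 0; i < n; i++) {
        h = h * 31 + data[i];                        /* toy hash, not Rabin */
        if (i - start + 1 >= MIN_CHUNK && (h & MASK) == 0) {
            printf("chunk [%zu, %zu)\n", start, i + 1);  /* content-defined cut */
            start = i + 1;
            h = 0;
        }
    }
    if (start < n)
        printf("chunk [%zu, %zu)\n", start, n);
}

int main(void)
{
    const char *s = "the quick brown fox jumps over the lazy dog, again and again";
    chunk((const unsigned char *)s, strlen(s));
    return 0;
}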
Q&A:
Q: Even in the HPC community, people deal with data-intensive tasks. How do you differentiate your work?
A: O(n^2) vs. O(n).









Adding Advanced Storage Controller Functionality via Low-Overhead Virtualization
Muli Ben-Yehuda, Michael Factor, Eran Rom, and Avishay Traeger, IBM Research—Haifa;

My takeaway: is a VM the sweet spot for adding storage controller functionality?

Motivation: how do you add new functionality to a storage controller?
Options: deep integration (inside the controller OS) - hard.
External gateway - low performance.
VM gateway (their approach) - though VM behaviour needs to be adjusted.
Q&A:
Q: Are cores assigned statically? What about VMs belonging to different companies?
A:
Q: What's the difference from a virtual storage appliance?
A: ….
