Friday, February 17, 2012

FAST'12 Session 7: a bit of everything

Extracting Flexible, Replayable Models from Large Block Traces
V. Tarasov and S. Kumar, Stony Brook University; J. Ma, Harvey Mudd College; D. Hildebrand and A. Povzner, IBM Almaden Research; G. Kuenning, Harvey Mudd College; E. Zadok, Stony Brook University

Main idea: take a standard trace, divide it into chunks, and define feature functions to turn the traces into a multi-dimensional histogram, trading accuracy for size reduction.
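A toy sketch of the chunk-and-featurize idea as I understood it (the trace format, feature choices, and bucket sizes are my own illustration, not from the paper):

```python
from collections import Counter

# Toy trace records: (timestamp_sec, offset, size, is_read)
trace = [
    (0.01, 4096, 4096, True),
    (0.02, 8192, 4096, True),
    (1.10, 0, 8192, False),
    (1.20, 16384, 4096, True),
]

CHUNK_SECONDS = 1.0

def chunk_trace(trace, chunk_len=CHUNK_SECONDS):
    """Group trace records into fixed-length time chunks."""
    chunks = {}
    for ts, off, size, is_read in trace:
        chunks.setdefault(int(ts // chunk_len), []).append((off, size, is_read))
    return [chunks[k] for k in sorted(chunks)]

def features(chunk):
    """Feature function: map a chunk to a coarse bucket (read fraction, avg size).
    Bucketing is where accuracy is traded for size reduction."""
    read_frac = sum(r for _, _, r in chunk) / len(chunk)
    avg_size = sum(s for _, s, _ in chunk) / len(chunk)
    return (round(read_frac, 1), int(avg_size // 4096) * 4096)

def model(trace):
    """Multi-dimensional histogram: feature bucket -> number of chunks."""
    return Counter(features(c) for c in chunk_trace(trace))

print(model(trace))
```

The resulting histogram is tiny compared to the raw trace, and a replayer can regenerate synthetic I/O by sampling chunks from it.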
Q&A:
Q: What about latency for dependent writes?
A: We don't handle dependencies; that's too hard. But in general, people don't do that with pure replay either.






scc: Cluster Storage Provisioning Informed by Application Characteristics and SLAs
Harsha V. Madhyastha, University of California, Riverside; John C. McCullough, George Porter, Rishi Kapoor, Stefan Savage, Alex C. Snoeren, and Amin Vahdat, University of California, San Diego

I really like this one!!!!

How to do hardware provisioning to achieve performance goals and reduce cost.
Goal: understand the configuration space (now and in the future)
SCC: building blocks + app workload + SLA = SLA-cost curve, and for each cost, an instantiation.
Server model: CPU + RAM + disk + network
App model: tasks + datasets + edges between tasks and datasets (I/O) + network?
SLA: operations per second
Compute: from input to output, with details like provisioning cache, enough CPU and storage, etc.
Evaluation: 4x cost reduction for the same SLA!!!!
Future: cloud deployment model?
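A minimal sketch of what building an SLA-cost curve could look like (the server configurations, costs, and ops/s numbers below are hypothetical, not from the talk):

```python
import math

# Hypothetical server configurations: (name, cost_usd, ops_per_sec per server)
CONFIGS = [
    ("cpu-heavy", 3000, 5000),
    ("balanced", 2000, 3500),
    ("disk-heavy", 1500, 2000),
]

def cheapest_provisioning(sla_ops_per_sec):
    """For a target SLA (ops/s), return the cheapest (config, count, total cost)."""
    best = None
    for name, cost, ops in CONFIGS:
        count = math.ceil(sla_ops_per_sec / ops)  # servers needed to meet SLA
        total = count * cost
        if best is None or total < best[2]:
            best = (name, count, total)
    return best

# Sweeping SLA targets yields the SLA-cost curve; each point carries
# its instantiation (which config, how many servers).
for sla in (1000, 10000, 50000):
    print(sla, cheapest_provisioning(sla))
```

The real system searches a much richer space (per-task CPU/RAM/disk/network demands, caching), but the shape of the output is the same: a cost for each SLA level, plus the concrete configuration that achieves it.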
Q&A:
Q: What about energy cost? What about more demanding future SLAs on top of your current configuration?
A: We do include some power cost. We do look at whether our configuration scales easily.
Q: Something about how you simulate computation cost?
A: Blah blah...didn’t understand...
Q: How do you optimize software and hardware tuning together to get maximum performance?
A: Future work?
Q: Multiple apps on the same cluster?
A: Future work; as apps interfere with each other, it will be hard.





iDedup: Latency-aware, Inline Data Deduplication for Primary Storage

Kiran Srinivasan, Tim Bisson, Garth Goodson, and Kaladhar Voruganti, NetApp, Inc.

How to do dedup in a primary system (traditionally it is an offline feature).
Why inline dedup: no over-provisioning for bursts
no background processing
efficient use of resources
Key design: 1. Only dedup consecutive blocks to reduce seek time, thus lowering overhead (sequence length configurable)
2. In-memory FPDB (fingerprint database): possible because for primary storage it is smaller (size of the FPDB cache is also configurable)
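A toy sketch of the consecutive-sequence policy as I understood it (function names, the dict-based FPDB, and the allocation scheme are my own simplifications):

```python
import hashlib

MIN_SEQ = 2   # configurable: minimum duplicate-sequence length worth deduping
fpdb = {}     # in-memory fingerprint DB: sha256 hex -> on-disk block number
next_free = 0

def fingerprint(block):
    return hashlib.sha256(block).hexdigest()

def write_run(blocks):
    """Write a run of logically consecutive blocks.
    Dedup only when >= MIN_SEQ incoming blocks match *physically*
    consecutive on-disk blocks, so later reads stay sequential."""
    fps = [fingerprint(b) for b in blocks]
    hits = [fpdb.get(f) for f in fps]
    # Length of the prefix whose on-disk block numbers are consecutive.
    run = 0
    while (run < len(hits) and hits[run] is not None
           and (run == 0 or hits[run] == hits[run - 1] + 1)):
        run += 1
    if run >= MIN_SEQ:                       # share the consecutive prefix
        return hits[:run] + write_fresh(blocks[run:], fps[run:])
    return write_fresh(blocks, fps)          # too short: skip dedup entirely

def write_fresh(blocks, fps):
    """Allocate new on-disk blocks and record their fingerprints.
    (Actual block data would be written to disk here.)"""
    global next_free
    placed = []
    for f in fps:
        fpdb[f] = next_free
        placed.append(next_free)
        next_free += 1
    return placed
```

The point of the `MIN_SEQ` threshold is the latency trade-off: deduping a lone block scatters reads across the disk, while deduping only long consecutive matches keeps reads sequential at the cost of some missed dedup opportunities.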
Q&A:
Q: You lose a lot of dedup opportunities in primary storage. From one backup to another, there is a lot of duplication.
Q: In reality, outstanding I/O varies and bursts.
A: That is a concern. We haven't addressed it.
