Thursday, February 16, 2012

FAST '12 Session 4: Flash and SSDs, Part I


Reducing SSD Read Latency via NAND Flash Program and Erase Suspension
Guanying Wu and Xubin He, Virginia Commonwealth University

Missed that talk… :(






Optimizing NAND Flash-Based SSDs via Retention Relaxation
Ren-Shuo Liu and Chia-Lin Yang, National Taiwan University; Wei Wu, Intel Corporation

My takeaway: program the flash faster and you get more performance but a higher raw bit error rate; the errors are still correctable, but retention time is reduced.
Reliability decreases as density increases.
Retention relaxation: applications typically don't require long retention.
Vth distribution model. Old: flat top + fixed-sigma Gaussian. New: sigma grows with retention time.
They then show the tradeoff between retention time and bit error rate as a curve.
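A minimal way to write the model down from my notes (treating sigma as an otherwise unspecified increasing function of retention time, and using a single read reference voltage, is my simplification, not the paper's exact formulation):

\[
  V_{th} \sim \mathcal{N}\bigl(\mu,\ \sigma(t)^{2}\bigr), \qquad
  \sigma(t)\ \text{increasing in retention time } t
\]
\[
  \mathrm{RBER}(t) \;\approx\; Q\!\left(\frac{V_{ref}-\mu}{\sigma(t)}\right), \qquad
  Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty} e^{-u^{2}/2}\,du
\]

So if the required retention time t is relaxed (made shorter), the same target RBER can be met with a wider initial distribution, i.e. a faster program operation.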
Realistic workloads: the required retention time is typically short.
System design: classify host writes (high performance, low retention time) vs. background writes (lower performance, long retention time, since they carry cold data).
Mode selector + retention tracker (reprograms a block when it is about to run out of retention).
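A rough sketch of that design as I understood it (all names, and the one-week/one-day numbers, are my guesses rather than the paper's):

import time

SHORT_RETENTION_SECS = 7 * 24 * 3600   # assumed relaxed retention guarantee (~1 week)

class RetentionTracker:
    """Remembers when short-retention blocks were programmed and flags the
    ones that must be reprogrammed before their relaxed guarantee expires."""
    def __init__(self, margin_secs=24 * 3600):
        self.deadline = {}          # block id -> time by which it must be refreshed
        self.margin = margin_secs   # refresh this long before the deadline

    def programmed_short(self, block, now=None):
        now = time.time() if now is None else now
        self.deadline[block] = now + SHORT_RETENTION_SECS

    def blocks_to_reprogram(self, now=None):
        now = time.time() if now is None else now
        return [b for b, d in self.deadline.items() if now >= d - self.margin]

    def reprogrammed_long(self, block):
        self.deadline.pop(block, None)  # long-retention blocks need no tracking

def select_mode(is_host_write):
    """Mode selector: host writes take the fast, short-retention program mode;
    background writes (cold data) take the slower, long-retention mode."""
    return "fast_short_retention" if is_host_write else "normal_long_retention"

A background task would then periodically call blocks_to_reprogram() and rewrite those blocks in the long-retention mode.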
Evaluation: 2x speedup, 5x for Hadoop.
Q&A:
Q: I'm concerned about the retention-time conclusion you drew: roughly 30% of the data is never touched again, so it should last!
A: That's why we have the background migration.
Q: Can you guarantee that data needing long retention was saved with normal retention?
A: Data is converted to long-retention mode every week.
Q: What's the overhead of the cleaning/converting?
A: We measured it using realistic traces.






SFS: Random Write Considered Harmful in Solid State Drives
Changwoo Min, Sungkyunkwan University and Samsung Electronics; Kangnyeon Kim, Sungkyunkwan University; Hyunjin Cho, Sungkyunkwan University and Samsung Electronics; Sang-Won Lee and Young Ik Eom, Sungkyunkwan University

My takeaway: it seems they implemented a log-structured file system with on-the-fly hot/cold data differentiation? Anyway, it works well on SSDs. I was too sleepy and may have missed some important points...

New file system for SSD
Background: sequential writes better than random writes
Optimization options: SSD H/W – high cost.
FTL with more efficient address mapping schemes – no information about the file system; not effective for no-overwrite file systems.
SSD-aware applications – lack of generality.
Hence the file-system approach.
When writing in 64MB units, random writes reach sequential performance.
So: a log-structured file system whose segment size is a multiple of the erase block size.
Eager data grouping on writes – differentiate hot and cold data; the cold/hot classification is done at run time.
Colocate blocks of the same hotness into the same segment when they are first written.
Hotness = write count / age
Segment hotness = mean write count of live blocks / number of live blocks.
Some details on how to divide segments into different “hotness” groups.
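A minimal sketch of the hotness bookkeeping as I noted it (the group boundaries and all names are my assumptions; the paper has its own grouping algorithm, and the segment-hotness formula below follows my notes rather than the paper's exact definition):

def block_hotness(write_count, age):
    """Block hotness = write count / age (hotter = rewritten often and recently)."""
    return write_count / age if age > 0 else float("inf")

def segment_hotness(live_blocks):
    """Segment hotness = mean write count of live blocks / number of live blocks
    (as written in my notes); live_blocks is a list of (write_count, age) pairs."""
    if not live_blocks:
        return 0.0
    mean_writes = sum(w for w, _ in live_blocks) / len(live_blocks)
    return mean_writes / len(live_blocks)

def group_by_hotness(blocks, boundaries=(0.01, 0.1, 1.0)):
    """Quantize blocks into hotness groups so that blocks of similar hotness
    end up colocated in the same segment. The boundary values are made up."""
    groups = [[] for _ in range(len(boundaries) + 1)]
    for blk in blocks:
        h = block_hotness(blk["write_count"], blk["age"])
        idx = sum(h >= b for b in boundaries)  # how many boundaries h has crossed
        groups[idx].append(blk)
    return groups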
Evaluation: outperforms LFS, as the segment utilization distribution is better (more nearly-empty and nearly-full segments).
Reduces the block erase count inside the SSD (thus prolonging SSD lifetime).
Q&A:
Q: Have you ever thought about compression?
A: The key insight is transforming random writes into large sequential writes.
Q: Why not compare your scheme with DAC?
A: We did compare against it in the paper.
Q: Why a 64MB chunk size?
A: It is based on SSD properties.
Q: In practice SSDs are smart and optimize for random writes. Might they break up your segments internally?
A: Good question! We measured and found that…
Q: Availability of SFS? Is it going to be a product?
A: No plan yet. It is open source now.
Q: You cache until you have 32MB of data to write? But some applications use synchronous writes.
A: Synchronization is hard for SFS. We write as much as possible, then write all the remaining blocks.
