Thursday, May 5, 2016

MSST16 Session 4: Spotlight on Flash memory and Solid-State Drives

Adaptive policies for balancing performance and lifetime of mixed SSD arrays through workload sampling

high-end SSD: cache
low-end SSD: main storage

1 high-end SSD cache in front of 3 low-end SSDs: the high-end SSD's lifetime is 1.47 years versus 6.34 years for the low-end SSDs, assuming an LRU cache policy

problem: the high-end SSD cache can wear out faster than the low-end SSD main storage

approach: balance performance and lifetime at the same time
metric: optimize latency over lifetime (lower is better)

selective caching policies ---> decide cache policy based on request size and hotness 
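A minimal sketch of what such a selective admission policy might look like (the size threshold, the hotness counter, and the admit interface are my own illustrative assumptions, not the paper's actual design):

```python
# Sketch of a selective caching admission policy (hypothetical interface).
# Idea from the talk: decide per request whether it goes to the high-end
# SSD cache, based on request size and hotness, trading cache hit rate
# (performance) against high-end SSD wear (lifetime).
from collections import defaultdict

class SelectiveCache:
    def __init__(self, size_threshold_kb=64, hot_threshold=3):
        self.size_threshold_kb = size_threshold_kb   # large requests bypass the cache
        self.hot_threshold = hot_threshold           # accesses before a block counts as hot
        self.access_count = defaultdict(int)

    def admit(self, lba, size_kb):
        """Return True if this request should be cached on the high-end SSD."""
        self.access_count[lba] += 1
        if size_kb > self.size_threshold_kb:
            return False                             # big sequential I/O: serve from low-end SSDs
        return self.access_count[lba] >= self.hot_threshold  # only cache hot, small blocks

cache = SelectiveCache()
print(cache.admit(lba=42, size_kb=4))    # False: not hot yet
print(cache.admit(lba=42, size_kb=4))    # False
print(cache.admit(lba=42, size_kb=4))    # True: now considered hot
```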

REAL: A Retention Error Aware LDPC Decoding Scheme to Improve NAND Flash Read Performance 

error correction codes: BCH, LDPC

Analytic models for flash-based SSD performance when subject to trimming 

SSD structure: N blocks, b pages per block, the unit of data exchange is a page, and a page has 3 possible states: erased, valid, or invalid

data can only be written to pages in the erased state
an erase can only be performed on a whole block

assume the victim block has j valid pages with probability p_j;
the write amplification A then equals
A = b / (b - sum_j (j * p_j))
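For concreteness, a quick numeric check of this formula (the distribution below is made up for illustration):

```python
# Write amplification from the valid-page distribution of victim blocks:
#   A = b / (b - sum_j (j * p_j)), where p_j = P(victim block has j valid pages)
b = 64                                    # pages per block
p = {0: 0.1, 8: 0.3, 16: 0.4, 32: 0.2}    # illustrative distribution over valid pages j
expected_valid = sum(j * pj for j, pj in p.items())
A = b / (b - expected_valid)
print(f"E[valid pages] = {expected_valid:.1f}, write amplification = {A:.2f}")
```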

prior work: mostly assumes uniform random writes and Rosenblum (hot/cold) workloads
    exact (closed-form) results when N -> infinity
    1. greedy is optimal under uniform random writes; d-choices is close to optimal (for d as small as 10) -- see the sketch after this list
    2. increasing hotness worsens WA in the case of a single WF (as no hot/cold data separation takes place)
    3. double WF (separates writes triggered by the host and by GC): WA decreases with hotness (as partial hot/cold data separation takes place)
    however, they all assume no trimming
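As a reminder of how d-choices victim selection works, a small sketch (not the paper's code; block count and page counts are arbitrary):

```python
# d-choices garbage-collection victim selection: sample d blocks uniformly at
# random and pick the one with the fewest valid pages. With d around 10 this
# is reported to be close to greedy (which scans all blocks) at far lower cost.
import random

def pick_victim_d_choices(valid_pages, d=10):
    """valid_pages: list where valid_pages[i] = number of valid pages in block i."""
    candidates = random.sample(range(len(valid_pages)), d)
    return min(candidates, key=lambda blk: valid_pages[blk])

def pick_victim_greedy(valid_pages):
    return min(range(len(valid_pages)), key=lambda blk: valid_pages[blk])

valid_pages = [random.randint(0, 64) for _ in range(1024)]
print("d-choices victim:", pick_victim_d_choices(valid_pages))
print("greedy victim:   ", pick_victim_greedy(valid_pages))
```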

How do we model trim behavior? 
      
Main takeaway: 
trimming lowers the effective load (utilization) seen by the device
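One way to read this (my interpretation, not the paper's exact model): if a fraction t of the logical space is trimmed on average, the device behaves as if it were running at a lower utilization, which feeds back into the write-amplification formula above.

```python
# Illustrative only: effective utilization under trimming (assumed relation).
# rho = logical space / physical space,
# t   = average fraction of the logical space invalidated by trim.
def effective_load(rho, t):
    return rho * (1.0 - t)

rho = 0.875          # e.g. 12.5% over-provisioning
for t in (0.0, 0.1, 0.2):
    print(f"trim fraction {t:.1f} -> effective utilization {effective_load(rho, t):.3f}")
```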

Reducing Write Amplification of Flash Storage through Cooperative Data Management with NVM

write amplification and GC cause SSD performance fluctuations

in traditional systems, all live pages need to be copied to another block while erasing
however, CDM skips the copying
"removable" state: a page can be erased without copying if its data is also held in the NVM cache

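A rough sketch of the idea (the page states and the NVM-cache check are paraphrased from the talk; the interface is made up):

```python
# GC with cooperative data management (sketch): pages whose data is also held
# in the NVM cache are in a "removable" state and are dropped instead of copied,
# cutting the copy traffic that normally drives write amplification.
def collect_block(block_pages, nvm_cache):
    """block_pages: list of (lba, state), state in {'valid', 'invalid', 'removable'}.
    Returns the pages that still must be copied before the block can be erased."""
    to_copy = []
    for lba, state in block_pages:
        if state == 'invalid':
            continue                         # already stale, nothing to do
        if state == 'removable' or lba in nvm_cache:
            continue                         # data lives in the NVM cache, skip the copy
        to_copy.append(lba)                  # traditional GC path: copy before erase
    return to_copy

pages = [(10, 'valid'), (11, 'invalid'), (12, 'removable'), (13, 'valid')]
print(collect_block(pages, nvm_cache={13}))  # -> [10]; pages 12 and 13 need no copy
```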

issue 1: consistency ---> the file system needs to be modified
issue 2: communication overhead ---> events in the cache and the storage need to be reported to each other synchronously ---> piggyback the notifications on NVMe

NV-cache as in-storage cache

evaluation: 
CDM reduces write amplification by 20x and improves response time as well

Exploiting Latency Variation for Access Conflict Reduction of NAND Flash Memory

motivation: 
ECC complexity, ECC capability, and read speed tradeoff: a higher number of sensing levels means more precise reads and higher ECC capability, at the cost of read speed
program size and write speed tradeoff:
process variation and retention variation lead to speed variation


hotness-aware write scheduling
retention-aware read scheduling
write: size-based predicted hotness (see the sketch below)
read:
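A toy version of the write-side policy (the size threshold and the fast/slow program split are my own assumptions for illustration):

```python
# Hotness-aware write dispatch (sketch): small requests are predicted hot and
# sent to a fast program path, while large requests are predicted cold and can
# tolerate the slow path, reducing access conflicts on the fast path.
def dispatch_write(size_kb, hot_size_threshold_kb=16):
    predicted_hot = size_kb <= hot_size_threshold_kb    # size-based hotness prediction
    return "fast-program queue" if predicted_hot else "slow-program queue"

for size in (4, 16, 256):
    print(f"{size:>3} KB write -> {dispatch_write(size)}")
```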

evaluation: 






