New FTL Scheme Experimental Setup

11 Apr

The wear-leveling mechanism is relatively independent of the other components. Wear leveling raises many questions, such as how to identify worn blocks, how to reclaim blocks, and where to place valid data. The target applications and the architectural style of the system should also be considered.

If multi-processing is supported, garbage collection and wear leveling can run in the background without interrupting other operations. In embedded or real-time environments, however, background execution may be impossible or undesirable, so these functions may have to be performed on demand. All in all, wear leveling is another interesting research topic, and many existing works have studied this issue.

It is unrealistic to evaluate all combinations, and it is difficult to single out one representative wear-leveling strategy. Another reason for omitting wear leveling is to provide a clear view of the performance comparison among the different FTLs. For the sake of wear leveling, some cold data has to be migrated into worn blocks, which introduces many noise operations. Since our article focuses on address translation and data organization, it is best not to be distracted by the other components. We believe that applying the same wear-leveling strategy to all schemes would not affect the results of our simulation.

To help evaluate the possibility of further improvement over the proposed LazyFTL, we also compared LazyFTL with the theoretically best solution, in which each page read request results in exactly one page read operation, each page write request results in exactly one page write operation, and a block erase is invoked once every 64 page writes.
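The idealized cost model above can be sketched in a few lines. This is only an illustration of the accounting rule stated in the text (one flash read per page read, one flash write per page write, one erase per 64 writes); the function name and the block geometry of 64 pages per block follow the description here, not any published code.

```python
# Assumed geometry from the text: 64 pages per block.
PAGES_PER_BLOCK = 64

def ideal_flash_ops(num_page_reads, num_page_writes):
    """Cost of the theoretically best FTL for a given request count.

    Returns (flash_reads, flash_writes, block_erases): each page read
    costs one flash read, each page write costs one flash write, and
    one block erase is charged for every 64 page writes.
    """
    return (num_page_reads,
            num_page_writes,
            num_page_writes // PAGES_PER_BLOCK)

# e.g. a trace with 1000 page reads and 640 page writes:
# ideal_flash_ops(1000, 640) -> (1000, 640, 10)
```

Any measured FTL can then be judged by how far its operation counts exceed this lower bound on the same trace.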

When implementing NFTL-N, we found that at first the response time of write requests drops rapidly as the length limit of the replacement block list is enlarged. At a certain point around 7, however, the write performance stabilizes and remains constant. Another issue that surprised us is that the search cost of read requests does not increase much as the limit is relaxed. This is probably because, for the target proportion of the flash, replacement block lists have little opportunity to grow long before being reclaimed, even if the limit has been enlarged.
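The trade-off described here can be illustrated with a small sketch of an NFTL-style virtual block. This is a simplified illustration under our own naming (the class and its methods are hypothetical, not the paper's code): writes append to a chain of replacement blocks, so a longer chain limit delays costly merges (better write latency), while reads must search the chain newest-first (read cost grows with chain length).

```python
class VirtualBlock:
    """Simplified NFTL-style virtual block: one primary block plus a
    chain of replacement blocks, modeled as page-offset -> data maps."""

    def __init__(self):
        self.primary = {}   # page offset -> data in the primary block
        self.chain = []     # replacement blocks, oldest first

    def write(self, offset, data, max_chain_len):
        # A page offset can be written once per replacement block; an
        # overwrite needs a new block, and a full chain forces a merge.
        if not self.chain or offset in self.chain[-1]:
            if len(self.chain) == max_chain_len:
                self.merge()
            self.chain.append({})
        self.chain[-1][offset] = data

    def read(self, offset):
        # Search the newest replacement block first; the search cost is
        # what grows when the chain-length limit is relaxed.
        for block in reversed(self.chain):
            if offset in block:
                return block[offset]
        return self.primary.get(offset)

    def merge(self):
        # Fold all replacement blocks into a fresh primary block
        # (in real NFTL this implies copies and erases).
        for block in self.chain:
            self.primary.update(block)
        self.chain = []
```

In this model, repeated writes to the same offset lengthen the chain up to the limit, matching the observation that a larger limit helps writes only until merges become rare.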

In our experiments, the maximum length of the replacement block list is set to 16. Finally, one scheme requires relatively complex tuning. In our experiments, 8 consecutive blocks are allocated to the log block area for sequential LBAs, and the other log blocks serve random LBAs. The utilization thresholds of the cold and hot partitions are set to 0.2 and 0.9, respectively. The other schemes require no special tuning.


