Title: Secure Static Data De-duplication
Author: Rohit Pawar, Payal Zanwar, Shruti Bora, Shweta Kullkarni
|
Citation |
Vol. 16 No. 3 pp. 69-73
Abstract: Data de-duplication is a technique used to improve storage efficiency. In a static data de-duplication system, hashing is carried out on the client side, first at the file level. The de-duplicator identifies duplication by comparing the computed hash against the hash values already held in the metadata server. If a match is found, a logical pointer to the existing copy is created instead of storing the redundant data. If no match exists, the same process is carried out at the chunk level: duplicated data chunks are identified, only one replica of each chunk is kept in storage, and logical pointers are created for the other copies. If the hash value is new, it is recorded in the metadata server, the file or corresponding chunk is stored in the file server, and its logical path, expressed as logical pointers, is also stored in the metadata server. The static de-duplicator is implemented with three components: the interface, the de-duplicator, and the storage. The interface hashes the uploaded file and connects the client to the de-duplicator. After receiving the hash value, the de-duplicator carries out its function as described above. The storage component consists of the file server and the metadata server. Thus, de-duplication can reduce both storage space and network bandwidth.
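
The file-level-then-chunk-level lookup described in the abstract can be pictured with a minimal sketch, which is not the authors' implementation: an in-memory dict stands in for the metadata server, another dict for the file server, and fixed-size chunks stand in for TTTD variable-size chunking; the names CHUNK_SIZE, store, and upload are hypothetical.

import hashlib

CHUNK_SIZE = 4096   # fixed size used here in place of TTTD chunking

metadata = {}       # hash value -> logical pointer (metadata server stand-in)
file_store = {}     # logical pointer -> stored bytes, one replica per unique hash

def digest(data: bytes) -> str:
    # SHA-256 as one concrete choice of Secure Hash Algorithm
    return hashlib.sha256(data).hexdigest()

def store(data: bytes, kind: str) -> str:
    # Compare the hash against the metadata server; on a match, return a
    # pointer to the existing replica instead of storing redundant data.
    h = digest(data)
    if h in metadata:
        return metadata[h]
    pointer = f"{kind}:{h}"       # new hash: record it and store the data once
    metadata[h] = pointer
    file_store[pointer] = data
    return pointer

def upload(data: bytes) -> list:
    # File-level check first; fall back to chunk-level de-duplication.
    h = digest(data)
    if h in metadata:
        return [metadata[h]]
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [store(c, "chunk") for c in chunks]

# Example: upload(b"...file contents...") returns logical pointers; uploading
# the same bytes again writes nothing new and reuses the existing pointers.
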
Keywords: SHA (Secure Hash Algorithm), TTTD (Two Threshold Two Divisor)
URL: http://paper.ijcsns.org/07_book/201603/20160310.pdf