Informatica 39 (2015) 431-442 431

Efficient Multimedia Data Storage in Cloud Environment

Prachi Deshpande, S.C. Sharma and Sateesh K. Peddoju
Indian Institute of Technology Roorkee-247667, India
E-mail: psd17dpt,scs60fpt,drpskfec@iitr.ac.in

Ajith Abraham
Machine Intelligence Research Labs (MIR Labs), WA, USA
IT4Innovations-Center of Excellence, VSB - Technical University of Ostrava, Czech Republic
E-mail: ajith.abraham@ieee.org

Keywords: BMA, cloud, compression, data storage, multimedia, SOA, web

Received: June 24, 2014

With the rapid adoption of social media, people have become habituated to using images and video to express themselves. Future communication will replace the conventional means of social interaction with video or images. This, in turn, requires huge data storage and processing power. This paper reports a compression/decompression module for image and video sequences in the cloud computing environment. The reported mechanism acts as a submodule of the IaaS layer of the cloud. Compression of the images is achieved through redundancy removal using a block matching algorithm. The proposed module has been evaluated with three different video compression algorithms and variable macroblock sizes. The experimentation has been carried out in a cloud host environment using the VMware Workstation platform. Apart from being simple to execute, the proposed module does not incur an additional monetary, hardware or manpower burden to achieve the desired compression of the image data. Experimental analysis has shown a considerable reduction in the data storage requirement as well as in the processing time.

Povzetek: An original method for efficient storage of multimedia content in the cloud is presented.

1 Introduction

With the advent of the concept of cloud computing, the problem of data storage has been solved for a while. The distributed nature of the cloud allows storage of huge data without any hassle.
Cloud computing is an amalgamation of different technologies such as networking infrastructure, service-oriented architecture (SOA), Web 2.0 and virtualization. The advent of cloud computing has abstracted away the complexities of the underlying networks, which arise from a variety of applications and data formats. Due to this generalization, time- and location-independent services are available to cloud users. Considering these facts, efforts had been initiated to implement our own private cloud for research and analysis purposes [1, 2]. Over the last decade or more, the web has emerged as a powerful tool for social interaction. The possibility of using multimedia content in communication has revolutionized social interaction. This has given rise to intense growth in the usage of multimedia-enabled mobile appliances. In consequence, a huge volume of data is produced and processed on a daily basis in the forms of content (multimedia), structure (links) and usage (logs) [3]. The demand for information exchange, particularly in the form of video/pictures, has increased manifold. Recently 'Microsoft' claimed that nearly 11 billion images are hosted by its cloud storage service [4]. 'Facebook' has also announced a store of 220 billion photos, growing by 300 million images per day [5]. Categorization of such a huge chunk of data is a very costly affair owing to the storage (hard disk) cost. The scenario may become worse when the overheads on account of power consumption, cooling systems and, more significantly, skilled manpower recruitment are added to the storage cost. In future, even when processed via a cloud setup, the processing of user-generated image data may be hampered by its sheer volume. Hence, an efficient mechanism to compress the image data, along with an overall reduction in the storage cost over the cloud, is the need of the hour.
The compression mechanism must also preserve the quality of the reconstructed image without requiring an additional hardware setup. A proper compression technique may reduce the burden not only on cloud storage but also on the application devices that process the image data. In general, images are stored in the joint photographic experts group (JPEG)/bitmap (BMP) file format. However, the individual compression achieved with these file formats may not be sufficient when a sequence of images (video) is to be stored, because the redundancy between the images is ignored while compressing them. Fig. 1 depicts an image sequence indicating the interrelation of the frames (the motion) and the huge redundancy between them. A variety of approaches are reported in the literature to achieve image data compression by redundancy removal. Some popular approaches are block matching (BMA), multiple-to-one prediction, pseudo sequence, and content-based image retrieval (CBIR) [6-8]. In BMA, the redundancy is searched for in the immediately following incoming frame. This method may be applied to similar or dissimilar images. Moreover, the search area is restricted to the incoming frame rather than the entire database. The multiple-to-one compression methodology is based on the hypothesis that, for similar images, the values of their low-frequency components are very close to those of their neighboring pixels in the spatial domain. A low-frequency template is created and used as a prediction for each image to compute its residue. The accuracy of this method is proportional to the similarity of the image data. Pseudo-sequence-based image compression exploits the statistical redundancy between subsequent images. This method requires an additional mechanism to extract the statistical characteristics of the image. The similar images are arranged into a tree structure.
The compression methodology needs to be applied to each branch. In the context of the cloud, this method seems to be very complex, as the cloud may hold a variety of images rather than similar ones. CBIR is used to search digital images in large databases using image attributes like colors, shapes, textures, or any other information that may be derived from the image itself. Thus, to achieve comprehensive compression of image data in the cloud paradigm, this method needs to search the entire database over the cloud. Considering factors like the complex environment of the cloud, the ease of the searching process and the speed of operation, a compression and decompression module for image and video data in the cloud computing environment has been proposed in this paper. Simple BMA is used in the proposed module to achieve the desired compression. The novelty of the present work lies in the verification of the proposed module with different block sizes and some fast BMAs, with an aim to study the quality of the reconstructed images. The simulations are also carried out in a cloud environment to validate the proposed concept. The rest of the paper is organized as follows: Section 2 presents the state of the art of multimedia usage over the cloud environment. The proposed approach is described in Section 3, and the results are discussed in Section 4. The article is concluded, with further scope, in Section 5.

2 Related work

2.1 Image compression

It is believed that images and video will be the most preferred mode of communication in next-generation communication. This requires huge data storage. Hence, to cope with the data storage issue, the existing networks must be highly scalable. However, this option is very costly and complex to implement.

Figure 1: Video stream.
Video/image compression provides an efficient way to eliminate the redundant information within images, which may lead to a lower storage requirement and quicker transmission of the images. An in-depth look at the current and future technologies for video compression has been reported in [9]. JPEG, HEVC and H.264 are the prominent compression standards available to minimize the superfluous information [10-12]. The pseudo sequence compression method suffers from two main drawbacks: firstly, it requires highly correlated images for compression and, secondly, it does not compress beyond the limits of the sequence definition [7]. Hence, the inter-image redundancies may be a problem when it is used in a cloud scenario. The local feature description approach sounds good regarding the quality of the reproduced image [13]. It decides the reconstruction of the image by searching for the similarity pattern over all the available data sets. In the cloud scenario, it may entail a huge database search. Searching and retrieving images over the internet may also be carried out by using descriptions of the images such as outline, semantic contents, segmentation of moving targets, sub-band coding and multiple hypergraph ranking [14-18]. However, all these methods suffer from one or more drawbacks, such as a large search area/database, the speed of search, the quality of the reproduced image and the method of removing the redundancies. Intra prediction and transform is another popular approach for image compression [19, 20]. However, this approach suffers from the requirement of a highly correlated encoder and decoder. Hence, there is a need for a new mechanism to deal with the big data arising from future multimedia communication.

BMA is a tool used for locating matching blocks in video frames or images for motion estimation. It finds a block from a reference frame i that matches a block in some other incoming frame j. BMA provides increased efficiency in interframe compression by identifying the temporal redundancy in the video sequence. This approach utilizes different cost functions to decide whether a given block in frame j matches the search block in frame i or not. Unlike its counterparts, BMA does not require a huge database search to reconstruct the image. It only searches for the resemblance in the next immediate frame. The reconstruction quality of the image is also fair enough, as it is based on the motion estimation in the successive images/frames. Hence, BMA is preferred in the proposed compression module. Table 1 provides a brief summary of the various approaches to image compression.

2.2 Cloud-based multimedia data storage

In recent years, multimedia data processing over the cloud has become prominent due to the increasing use of image/video in social media platforms. Significant efforts have been reported describing the progress of multimedia data processing in cloud-based environments. A cloud-based multimedia platform for image and video processing is discussed by Gadea et al. [21]. However, this approach does not emphasize the reduction of the storage requirement of multimedia data, and it depends upon the capacity of the cloud environment for data storage. An SOA module for medical image processing using cloud computing was proposed by Chiang et al. [22]. However, this effort was also dependent on the ability of the cloud to store and process the multimedia data and never considered the cost incurred for data storage. Zhu et al. [23] discussed multimedia cloud computing in detail. They concentrated on the storage and processing of multimedia data and proposed a multimedia cloud. This approach was constrained by the need for a separate arrangement for storing and processing the multimedia data in the cloud environment.
In server-based multimedia computing, a set of servers deals with multimedia computing and the clients are controlled by the servers [24]. However, this method suffers from high deployment cost. In peer-to-peer (P2P) multimedia computing, the computing task is carried out in a piecewise manner between the peers [25]. This improved the scalability, but at the cost of the quality of service (QoS). Content delivery networks (CDN) reduced the communication overhead. However, this approach was constrained by the scalability challenge due to limited server capabilities [26, 27]. A data middleware is proposed in [28] to overcome the I/O bottleneck issues of big-data legacy applications. However, this approach was developed to support the requirements of document stores. A dedicated media cloud concept is proposed in [29], wherein the cloud is only meant to process multimedia data. A scale-invariant feature transform was reported in [30] for image compression over the cloud. This approach was based on searching for similarity among all the images stored in the cloud, and its search accuracy was entirely dependent on the number of images available for the search. Recently, a framework for multimedia data processing over heterogeneous networks was proposed [33]. All these methodologies never considered the huge storage memory requirement and allied overheads for on-demand video/image access by the users.

3 Proposed approach

Cloud computing is the best alternative to cope with the huge data storage requirement. However, the cloud may solve the data storage requirement only at the cost of a huge monetary and infrastructure overhead. As cloud services are based on the 'pay-per-use' concept, end users have to pay for these overheads. The data storage cost may be minimized by compressing the image data before storage and decompressing it as and when required. This approach may minimize the monetary overheads by a great deal.
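This compress-before-store, decompress-on-demand flow can be illustrated with a minimal storage wrapper. Here `zlib` stands in for the image codec and an in-memory dict stands in for the cloud object store; both are illustrative assumptions, not part of the proposed module:

```python
import zlib

class CompressedStore:
    """Toy store-layer sketch: compress on write, decompress on read."""

    def __init__(self):
        self._blobs = {}  # stand-in for cloud block storage

    def put(self, key, data: bytes) -> int:
        blob = zlib.compress(data)      # compress before storing
        self._blobs[key] = blob
        return len(blob)                # bytes actually stored

    def get(self, key) -> bytes:
        return zlib.decompress(self._blobs[key])  # decompress on demand

store = CompressedStore()
frame = bytes(320 * 240)                # highly redundant dummy frame
stored = store.put("frame-0", frame)
assert store.get("frame-0") == frame    # lossless round trip
assert stored < len(frame)              # redundancy removed before storage
```

The user pays only for the compressed bytes actually kept in the store, which is the cost argument made above.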
A cloud-based compression and decompression mechanism (CDM) will serve this purpose. Hence, in this paper, a CDM has been reported to cater to the need for video data storage over the cloud infrastructure. It utilizes interframe coding to compress the incoming images with minimal burden on cloud resources. In general, a cloud is structured with three main layers of operation: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). Each of them provides some specialized services to the cloud users. The IaaS layer deals with the storage of data in the cloud mechanism. Hence, we suggest a CDM module at each virtual machine (VM), as a software abstraction in the IaaS layer of the cloud, to store the video data in compressed form. Table 2 provides a brief comparison of the proposed method with existing approaches for processing multimedia data over the cloud. In this analysis, an H.264-based interframe predictive coding is used for eliminating the temporal and spatial redundancy in video sequences for effective compression. In the typical predictive coding approach, the difference between the present frame and the predicted frame is coded and transmitted. The predicted frame depends on the previous frame. The transmission bit rate is directly proportional to the prediction of the video frame. This approach is accurate for a still picture. However, for video sequences with large motion, a better prediction is possible only with proper information about the moving objects. Motion compensation (MC) is the phenomenon that utilizes information about the displacement of an object in consecutive frames for the reconstruction of a frame. The proposed CDM module consists of an encoder and a decoder, as shown in Fig. 2. At the encoder side, the first frame of the image/video sequence is initially considered as the reference frame. The next frame is considered as the incoming frame.
Contribution | Approach | Methodology | Disadvantage | In cloud scenario
Zou et al. [7] | Pseudo sequence compression | Minimum spanning tree (MST) | Exhaustive search is required for finding the base feature | QoS degradation as the search area is large
Rajurkar and Joshi [8] | CBIR | Attribute-based search | Loss of information is very high due to compression | Searching for a specific attribute may be a burden on the cloud
Wallace [10] | JPEG | Converts each frame of the video source from the spatial (2D) domain into the frequency domain | Images with strong contrasts do not compress well | Cloud has images with a variety of contrasts in its storage
Wiegand et al. [12] | H.264 | Block-based similarity search | Computationally complex | Interframe similarity search from a sequence
Zhou et al. [13] | SIFT | Local feature description | Needs a huge database for correct reconstruction of images | Entire cloud will be the search area

Table 1: Image compression techniques.

Contribution | Data processing | Method | Algorithm | Search area
Shi et al. [17] | By separate encoder and decoder in the cloud | Use of internal/external correlation between the target image and images in the cloud | Sub-band coding | Entire database over cloud
W. Zhu et al. [23] | Dedicated cloudlet servers placed at the edge of a cloud to provide media services | Load balancer and cloud proxy are used for processing | Feature extraction and image matching | Entire database over cloud
Hui et al. [29] | By a dedicated cloud mechanism | Searching by comparing the features of the incoming video with the database | Attribute-based search | Entire database over cloud
Yue et al. [30] | By separate encoder and decoder in the cloud | Searching similarity in the large-scale image database available on the cloud | Scale-invariant feature transform | Entire database over cloud
Kesavan et al. [31] | By a dedicated private cloud mechanism | The private cloud stores, searches and provides multimedia services to the user | — | Entire database over cloud
Hussein and Badr [32] | By using native cloud capacities | Lossy and lossless compression techniques | Huffman encoding | Entire database over cloud
Proposed approach | Local encoder/decoder as a software abstraction | Use of information in the video/image to be stored | Block matching | Limited to the video/image to be stored

Table 2: Comparison of the proposed approach.

Figure 2: Conceptual diagram of the proposed CDM.

The individual image is divided into macroblocks of desired dimensions (i.e. 16 x 16, 8 x 8, 4 x 4). In this study, the block-matching algorithm (BMA) is used as an MC tool for interframe predictive coding. BMA works on a block-by-block basis. It finds a suitable match for each block in the current frame from the reference frame. The comparison eliminates the similar part from the particular block of the current frame and provides a pixel (pel) position from which the motion (difference) appears. This pixel position is called a motion vector (MV) corresponding to a particular block of the image. This is a two-bit parameter (x, y), which is searched in either direction of a particular pixel with a search range parameter d. The search procedure is repeated for all pels of a block, and a single motion vector is obtained. Hence, an image consists of a number of MVs proportional to its block size. Thus, an effective compression may be achieved by transmitting only the MVs along with the reference frame, rather than the entire frame. Computationally lightweight search criteria (cost functions) such as peak signal-to-noise ratio (PSNR), mean square error (MSE) and mean absolute error (MAE) are used for the evaluation of a suitable match. Fig. 2 shows the architecture of the proposed concept. At the decoder side, the reference image and the MV information of the compressed image are required for the reconstruction of the image.
The corresponding matching block is predicted based on the information of the MVs and the search range parameter d. Thus, the image is reconstructed by computing a block for each MV. The quality of the reconstructed image is validated by its PSNR value. The proposed mechanism may run as a software component within each VM in the cloud. Whenever image/video data is to be processed, this module will perform the compression or decompression as per the user requirement. This module will not incur any monetary or technical burden on the existing mechanism, as no additional hardware is required. In this way, it may be the best alternative for reducing the data storage size and cost for image/video data.

Figure 3: Concept of macroblock matching [34].

4 Results and discussion

Fig. 3 shows the basic concept of BMA. It considers a macroblock of M x N size, which is searched for within a search window of size d in all directions around the macroblock. The typical block size is of the order of 16 x 16 pixels. The output of a cost function decides the macroblock matching: the macroblock with the least cost is the best match to the current block under search. Eqs. 1 to 3 give the computationally efficient cost functions MAE, MSE and PSNR. 'cputime', a built-in function of MATLAB, is used to estimate the computational time of the proposed method. In the block matching approach, the CDM divides the incoming frame into a matrix of macroblocks. Each macroblock is compared with the equivalent block and its neighboring blocks in the reference (previous) frame. The process forms a two-dimensional vector, which indicates the motion of a macroblock from one position to another with respect to the reference frame. The motion in the incoming frame is predicted based on the movement of all the macroblocks in the frame. To obtain an accurate macroblock match, the search range is confined to d pels in all four directions around the equivalent macroblock in the reference frame.
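The block-by-block procedure described above can be sketched in a few lines. This is an illustrative full-search implementation in Python with NumPy (the frame size, block size and search range d below are arbitrary demo choices, not the paper's test settings): one motion vector per macroblock is found by exhaustive MAE minimization, and the decoder rebuilds the frame from the reference plus the vectors alone.

```python
import numpy as np

def full_search(cur, ref, block=8, d=4):
    """One motion vector (dy, dx) per macroblock via exhaustive MAE search."""
    H, W = cur.shape
    mvs = {}
    for y in range(0, H, block):
        for x in range(0, W, block):
            target = cur[y:y+block, x:x+block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-d, d + 1):          # scan the +/- d window
                for dx in range(-d, d + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + block > H or rx + block > W:
                        continue                 # candidate falls off the frame
                    cand = ref[ry:ry+block, rx:rx+block].astype(np.int32)
                    mae = np.mean(np.abs(target - cand))   # cost, Eq. (1)
                    if best is None or mae < best:
                        best, best_mv = mae, (dy, dx)
            mvs[(y, x)] = best_mv
    return mvs

def reconstruct(ref, mvs, block=8):
    """Decoder side: rebuild the frame from reference blocks shifted by MVs."""
    out = np.zeros_like(ref)
    for (y, x), (dy, dx) in mvs.items():
        out[y:y+block, x:x+block] = ref[y+dy:y+dy+block, x+dx:x+dx+block]
    return out

# A random reference frame and a copy shifted right by 2 pels:
# full search should recover the shift for every interior block.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
cur = np.roll(ref, 2, axis=1)
mvs = full_search(cur, ref)
rec = reconstruct(ref, mvs)
assert np.array_equal(rec[:, 8:], cur[:, 8:])   # interior blocks match exactly
```

Fast BMAs such as TSS and FSS replace the exhaustive inner loop with a coarse-to-fine pattern of candidate points, which is where their CPU-time savings come from.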
d is the search range restriction and is proportional to the nature of the motion.

MAE(d_x, d_y) = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left| C(i,j) - R(i+d_x, j+d_y) \right| \quad (1)

MSE(d_x, d_y) = \frac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left( C(i,j) - R(i+d_x, j+d_y) \right)^2 \quad (2)

PSNR(dB) = 10 \log_{10} \left( \frac{255^2}{MSE} \right) \quad (3)

Here, C(i,j) denotes the pels of the current macroblock, R(i,j) the pels of the reference frame, (d_x, d_y) the candidate displacement, and M and N the size of the macroblock. A motion-compensated image is formed with the knowledge of the motion vectors and the macroblocks from the reference frame. The cost functions PSNR and MSE are used to determine the quality of the prediction, while 'cputime' is used to estimate the computational complexity. The full search (FS) algorithm involves large computations for block matching, since the cost functions are calculated at each possible position in the search range. Hence, FS provides an excellent match with the highest PSNR, but at the cost of enormous computation. Fast BMAs like the three-step search (TSS) and four-step search (FSS) [35] provide a PSNR comparable with that of the FS method with reduced computation. Hence, in the proposed work, the fast BMAs have been evaluated in the cloud environment along with the FS method. Different macroblock sizes are also used to predict the performance of the proposed module. The performance of the cloud setup under these conditions is also evaluated.

Fig. 4 shows the software abstraction used for the experimentation of the proposed method. Here, the virtualization has been achieved by using VMware Workstation with Windows 7 as the host OS. Two VMs were also created on the host OS with Windows 7 as the guest OS. The performance was analyzed by executing the algorithms on the host OS.

Figure 4: CDM deployment in the cloud environment.

The performance of FS, TSS and FSS was verified with a standard block size of 16 x 16 pels. Further, the evaluation was carried out with block sizes of 8 x 8 and 4 x 4 pels to decide the accuracy of the image reconstruction. Two gray-scale images were captured and stored in BMP format with a width of 320 pels, a height of 240 pels and a depth of 8 bits. Table 3 summarizes the results of the BMA algorithms with the different block sizes. From Table 3, it is evident that the lower the macroblock size, the higher the detection accuracy. The results obtained by choosing a 16 x 16 or 8 x 8 macroblock size were also compatible with those of the 4 x 4 macroblock size. Hence, depending on the needs of the application, a particular block size may be chosen. The PSNR, in all cases, is well above 30 dB, which is required for the proper reconstruction of the image/video at the receiver side [36]. The CPU time indicates the total time required by the CDM module to complete the encoding as well as the decoding operation. By observing the CPU-time readings for all the algorithms with the different block sizes, it may be inferred that the proposed module is competent enough to be used in real-time applications as well. The fair PSNR values and the CPU-time figures indicate a competitive QoS for the reconstructed images. The proposed method may be easily adapted for continuous video streams. In this case, the reference frame will be replaced by a particular incoming frame if the PSNR of the latter degrades below 30 dB after reconstruction.

The principal advantage of the CDM module is the reduction in the storage size (memory) for the video data. The storage size of a gray-scale image is estimated using Eq. 4:

FS(bytes) = \frac{H_P \times V_P \times B_D}{8} \quad (4)

where FS = file size, H_P = horizontal pels, V_P = vertical pels and B_D = bit depth. In traditional data storage, an M x N video frame of 8-bit depth requires M x N bytes of memory. However, in the CDM approach, the video frame is divided into n equal-sized macroblocks, and each macroblock is represented by its motion vector (x, y). Each motion vector requires only two bits for its storage.
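The bookkeeping behind Eq. 4 and this two-bits-per-vector assumption can be checked with a few lines of arithmetic (an illustration only; the 16 x 16 figures match the traditional and FS/TSS/FSS rows of Table 4):

```python
# Eq. 4: file size of an uncompressed gray-scale frame, in bytes
HP, VP, BD = 320, 240, 8           # horizontal pels, vertical pels, bit depth
fs_bytes = HP * VP * BD // 8
print(fs_bytes)                     # 76800 bytes: the traditional-storage row

# CDM storage for 16 x 16 macroblocks: one (x, y) motion vector per block,
# stored in 2 bits per vector as assumed in the text
block = 16
n_blocks = (HP // block) * (VP // block)
mv_bytes = 2 * n_blocks // 8        # 2n bits converted to bytes
print(n_blocks, mv_bytes)           # 300 macroblocks, 75 bytes
```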
Hence, a video frame with n macroblocks requires only 2n bits for its storage. Thus, the storage requirement is reduced significantly. Table 4 summarizes the storage requirement with the proposed approach for an image/frame size of 320 x 240 pels with an 8-bit depth.

Algorithm | Block size (M x N) | CPU time (s) | PSNR (dB) | MSE | MAE
Full search (FS) | 16 x 16 | 18.04 | 33.75 | 10.90 | 3.30
Full search (FS) | 8 x 8 | 22.50 | 36.42 | 14.81 | 3.84
Full search (FS) | 4 x 4 | 39.03 | 37.20 | 18.48 | 4.29
Three-step search (TSS) | 16 x 16 | 6.67 | 33.50 | 33.81 | 5.38
Three-step search (TSS) | 8 x 8 | 21.93 | 37.50 | 11.55 | 3.34
Three-step search (TSS) | 4 x 4 | 80.40 | 39.50 | 7.28 | 2.69
Four-step search (FSS) | 16 x 16 | 8.85 | 33.41 | 29.64 | 5.44
Four-step search (FSS) | 8 x 8 | 28.06 | 37.81 | 10.74 | 3.32
Four-step search (FSS) | 4 x 4 | 105.70 | 38.90 | 8.37 | 2.90

Table 3: BMA analysis with different macroblock sizes.

Approach | Block size (pels) | No. of macroblocks | Memory requirement (bytes)
Traditional | — | — | 76800
FS/TSS/FSS | 16 x 16 | 300 | 75
FS/TSS/FSS | 8 x 8 | 1200 | 300
FS/TSS/FSS | 4 x 4 | 4800 | 600

Table 4: Analysis of storage requirements.

Contribution | Cloud-specific disadvantages
Shi et al. [17] | Similarity search for image compression is carried out by searching the database over the entire cloud.
Zhu et al. [23] | Separate cloudlet servers are required. Similarity search for image compression is carried out by searching the database over the entire cloud.
Hui et al. [29] | An independent media cloud is required to process multimedia data.
Yue et al. [30] | A separate encoder and decoder mechanism is required.

Table 5: Cloud-specific limitations of the existing methodologies.

Contribution | Data set used | PSNR (dB)*
Shi et al. [17] | ZuBuD [37] | 33.57
Zhu et al. [23] | — | —
Hui et al. [29] | — | —
Yue et al. [30] | INRIA Holiday [38] | 29.65
Proposed approach | Original image sequence | 39.50

Table 6: QoS analysis of the proposed methodology. (* Highest PSNR value.)

Figure 5: (a) Reference frame, (b) current frame, (c) FS 16 x 16 block, (d) FS 8 x 8 block, (e) FS 4 x 4 block, (f) TSS 16 x 16 block, (g) TSS 8 x 8 block, (h) TSS 4 x 4 block, (i) FSS 16 x 16 block, (j) FSS 8 x 8 block and (k) FSS 4 x 4 block.

Algorithm | Macroblock size | CPU usage (%) | Memory usage (%)
FS | 16 x 16 | 7 | 36
FS | 8 x 8 | 4 | 36
FS | 4 x 4 | 2 | 36
TSS | 16 x 16 | 5 | 36
TSS | 8 x 8 | 3 | 35
TSS | 4 x 4 | 1 | 35
FSS | 16 x 16 | 4 | 35
FSS | 8 x 8 | 2 | 36
FSS | 4 x 4 | 1 | 36

Table 7: CDM performance in the cloud environment.

The approach proposed in [17] depends on the hypothesis that large-scale images in the cloud are always available for the similarity search. That methodology was tested on the publicly available 'ZuBuD' database, and the highest PSNR obtained for the reconstructed image was 33.57 dB. The approaches proposed in [23], [29] and [31] never commented on the reconstruction quality of the image/video. The approach proposed in [30] used a separate encoder/decoder mechanism for image data compression and decompression in the cloud; it was verified using the INRIA Holiday data set and provided a highest PSNR of 29.65 dB for the reconstructed image. In the present analysis, an indigenously captured image sequence is used. The proposed methodology provides a highest PSNR of 39.50 dB with TSS, and for the FS and FSS methods the resulting PSNR values are also higher than the highest values of the methods in [17] and [30]. Moreover, the existing methodologies suffer from the drawback of a vast search area in the cloud, or of a separate and dedicated cloud arrangement for multimedia data processing. The proposed module, in contrast, depends only upon the information of the incoming and previous frames for its operation. Hence, the storage requirement and the search operation are reduced by a great deal. This module does not require any separate hardware arrangement and will work as a software abstraction in the cloud infrastructure.
This is the key advantage of the proposed approach over the other reported methods. Fig. 5 depicts the motion-compensated video frames, and it may be concluded that the fast BMAs avoid the computational burden of the FS BMA without a considerable compromise in the quality of the reconstructed image. The choice of different block sizes is also an added advantage in achieving a fair quality of the reconstructed image. Fig. 6 shows the cloud performance for the proposed CDM module, observed with the help of the resource manager. The performance has been analyzed regarding CPU and memory usage. Table 7 gives the performance analysis of the proposed setup in the cloud environment. It is evident from Table 7 that the proposed module never accessed the network resources to perform its operation. The physical memory usage is also constant, around 35-36%, throughout the analysis for the different BMAs. The CPU usage varies according to the macroblock size. In all cases, the CPU usage never increased beyond 10% of the maximum capacity. This indicates that the proposed module works without causing an additional burden on the available resources.

Figure 6: CPU and memory performance of the proposed module for FS (16 x 16 block).

5 Conclusions

The paper reports CDM, a compression and decompression module for cloud-based video data processing and storage. This module may be placed in each VM as a software abstraction at the IaaS layer of the cloud architecture. The novelty of the present work is the analysis of BMA with different block sizes in a cloud environment. With the proposed module, the requirement of a dedicated cloud for data processing and storage is overruled. With the deployment of the proposed module, the multimedia data storage requirement is reduced with minimal overheads. The proposed module demonstrated a fair QoS regarding the reconstructed images. Hence, this approach may be a strong candidate for future-generation information processing technology.
In future, efforts may be initiated to design an adaptive CDM module to minimize the trade-off between the selection of a precise BMA algorithm and a suitable block size for a specific application.

References

[1] P. Deshpande, S. Sharma and S. K. Peddoju, "Deploying a private cloud: Go through the errors first", Proceedings of the Conference on Advances in Communication and Control Systems, Dehradun, India, Apr. 2013, pp. 638-641.
[2] P. Deshpande, S. Sharma and S. Peddoju, "Installation of a private cloud: A case study", Advances in Intelligent Systems and Computing, vol. 249, no. 2, pp. 635-648, 2014.
[3] Z. Zhang, Z. Zhengyou and R. Jain, "Socially connected multimedia across cultures", Journal of Zhejiang University-SCIENCE C, vol. 13, no. 12, pp. 875-880, 2012.
[4] J. Ong, "Picture this: Chinese internet giant Tencent's Qzone social network now hosts over 150 billion photos", 2012. [Available:] http://thenextweb.com/asia/2012/08/09/picture-this-chinese-internet-giant-tencents-qzone-social-network-now-hosts-over-150-billion-photos
[5] K. Kniskern, "How fast is SkyDrive growing?", 2012. [Available:] http://www.liveside.net/2012/10/27/how-fast-is-skydrive-growing
[6] C. Yeung, O. Au, K. Tang, Z. Yu, E. Luo, Y. Wu and S. Tu, "Compressing similar image sets using low frequency template", Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain, Jul. 2011, pp. 1-6.
[7] R. Zou, O. Au, G. Zhou, W. Dai, W. Hu and P. Wan, "Personal photo album compression and management", Proceedings of the IEEE International Symposium on Circuits and Systems, Beijing, China, May 2013, pp. 1428-1431.
[8] A. Rajurkar and R. Joshi, "Content-based image retrieval in defense application by spatial similarity", Defence Science Journal, vol. 52, no. 3, pp. 285-291, Jul. 2002.
[9] L. Yu and J.
Wang, "Review of the current and future technologies for video compression", Journal of Zhejiang University-SCIENCE C, vol. 11, no. 1, pp. 1-13,2010. [10] G. Wallace, "The JPEG still picture compression standard", IEEE Transactions on Consumer Electronics, vol. 38, no. 1, pp. xviii-xxxiv, Feb. 1992. [11] JCT-VC, WD6: Working draft 6 of high-efficiency video coding. JCTVC-H1003, JCTVC Meeting, San Jose, Feb. 2012. [12] T. Wiegand, J. Sullivan, G. Bjontegaard, and A. Luthra, "Overview of the H.264/AVC video coding standard", IEEE Transaction on Circuits and Systems for Video Technology, vol. 13, no. 7, pp. 560-576, Jul. 2003. [13] W. Zhou, Y. Lu and H. Li, "Spatial coding for large scale partial-duplicate web image search", Proceedings of the 18th ACM international conference on Multimedia, Firenze, Italy, pp. 511-520, Oct. 2010. [14] J. Smith and S. Chang, "Visualseek: A fully automated content based image query system", Fourth ACM international conference on Multimedia, Boston MA USA, 1996, pp. 87-98. [15] M. Lew, N. Sebe, C. Djeraba and R. Jain, "Content-based multimedia information retrieval: State of the art and challenges", ACM Transaction on Multimedia Computing, Communication Application, vol. 2, no. 1,pp. 1-19, Feb. 2006. [16] Y. Zhou, A. Shen, and J. Xu, "Non-interactive automatic video segmentation of moving targets", Journal of Zhejiang University-SCIENCE C, vol. 13, no. 10, pp. 736-749, 2012. [17] Z. Shi, X. Sun and F. Wu, "Cloud-based image compression via subband-based reconstruction", PCM-2012, Lecture Notes in Computer Science, vol. 7674, pp. 661-673,2012. [18] Y. Han, J. Shao, F. Wu and B. Wei, "Multiple hypergraph ranking for video concept detection", Journal of Zhejiang University-SCIENCE C, vol. 11, no. 7, pp. 525-537, 2010. [19] G. Sullivan and J. Ohm, "Recent developments in standardization of high efficiency video coding (HEVC)", SPIE, vol. 7798, pp.77980 V1-V7, 2010. [20] R. Song, Y. Wang, Y. Han and Y. 
Li, "Statistically uniform intra-block refresh algorithm for very low delay video communication", Journal of Zhejiang University- SCIENCE C, vol. 14, no. 5, pp. 374-382, 2013. [21] C. Gadea, B. Solomon, B. Ionescu and D. Ionescu, "A Collaborative Cloud-based multimedia sharing platform for social networking environments", 20th International Conference on Computer Communications and Networks, Maui, HI, 2011, pp. 1-6, [22] W. Chaing, H. Lin, T. Wu, and C. Chen, "Building a Cloud service for medical image processing based on service orient architecture", 4th International Conference on Biomedical Engineering and Informatics, Shanghai, Oct. 2011, pp. 1459-1465. [23] W. Zhu, C. Luo, J. Wang and S. Li, "Multimedia Cloud computing", IEEE Signal Processing Magazine, vol. 28, no. 3, pp. 59-69, May 2011. [24] K. Lee, D. Kim, J. Kim, D. Sul and S. Ahn, "Requirements and referential software architecture for home server based inter home multimedia collaboration services", IEEE Transactions on Consumer Electronics, vol. 50, no. 1, pp. 145-150, Feb. 2004. [25] L. Zhao, J. Luo, and M. Zhang, "Gridmedia: a practical peer-to-peer based live video streaming system", 7th IEEE Workshop on Multimedia Signal Processing, Shanghai, Nov. 2005, pp. 1-4. [26] G. Fortino, C. Mastroianni and W. Russo, "Collaborative media streaming services based on CDNs", Content Delivery Networks, LNEE, vol. 9, no. 3, pp. 297-316,2008. [27] N. Carlsson and D. Eager, "Server selection in large-scale video-on-demand systems", ACM Transactions on Multimedia Computing, Communications and Applications, vol. 6, no. 1, pp. 1-26, Feb. 2010. [28] K. Ma and A. Abraham, "Toward lightweight transparent data middleware in support of document stores", Proceedings of the third World Congress on Information and Communication Technologies (WICT 2013), Hanoi, Vietnam, Dec. 2013, pp. 255-259. [29] W. Hui, C. Lin, and Y. 
Yang, "Media Cloud: A new paradigm of multimedia computing", KSII Transactions on Internet and Information Systems, vol. 6, no. 4, pp. 1153-1170, Apr. 2012. [30] H. Yue, X. Sun, J. Yang and F. Wu, "Cloud based image coding for mobile devices-towards thousands to one compression", IEEE Transactions on Multimedia, vol. 15, no. 4, pp. 845-857, Jan. 2013. [31] S. Kesavan, J. Anand and J. Ayakumar, "Controlled multimedia cloud architecture and advantages", Advanced Computing: An International Journal, vol. 3, no. 2, pp. 29-40, 2012. 432 Informatica 39 (2015)87-442 Deshpande et al. [32] S. Hussein and S. Badr, "Healthcare Cloud integration using distributed Cloud storage and hybrid image compression", International Journal of Computer Applications, vol. 80, no. 3, pp. 9-15, 2013. [33] Y. Xu, C. Chow, M. Tham and H. Ishii, "An enhanced framework for providing multimedia broadcast/multicast service over heterogeneous networks", Journal ofZhejiang University-SCIENCE C, vol. 15, no. 1,pp. 63-80, Jan. 2014. [34] A. Barjatya, "Block matching algorithms for motion estimation", DIP 6620 spring 2004 Final Project Paper, pp. 1-6, 2004. [Available at:] http://profesores.fi-b.unam.mx/maixx/Biblioteca/ Librero_Telecom/BlockMatchingAlgorithmsForMotion Estimation.PDF [35] L. Po and W. Ma, "A novel four step search algorithm for fast block motion estimation", IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, no. 3, pp. 313-317, Jun. 1996. [36] S. Welstead, "Fractal and wavelet image compression techniques", SPIE Publication, pp. 155-156, Dec. 1999, ISBN : 9780819435033. [37] Eth-Zurich, Zurich building image database. [Available at]: http://www.vision.ee.ethz.ch/showroom/ zubud/index.en.html. [38] H. Jegou and M. Douze, "INRIA Holiday Dataset2008". [Available at]: http://lear. inrialpes. fr/people/jegou/data.php.