1 Introduction

Since digital cameras are easily accessible and relatively cheap, many people take dozens of pictures on a daily basis. These pictures are then shared via social media services or photo portals, not only with friends but also with thousands of incidental onlookers. Posting hundreds of photos taken with the same digital camera or smartphone allows a user to be linked to a particular device or model, which has negative consequences from a privacy point of view. More privacy-aware social media users remove the metadata indicating the time, location and device model using image editors; however, linking a picture with a device does not require this additional metadata. This can pose a serious threat to users' privacy even if they post an image outside of their personalized social media profiles: identifying a picture's author may lead to anything from a slight inconvenience such as personalized spam to serious criminal activity such as stalking or targeted phishing. Naturally, techniques for linking a device to a user might also serve crime prevention. One of the most popular issues in image processing and image forensics is recognizing the camera from its images and using this as a "digital fingerprint" or proof of presence. The algorithm proposed by Lukás et al. in [21] is considered the most efficient and is commonly used by black- and whitehats alike. Many recent approaches based on deep learning models use the image denoising formula presented there. Lukás et al.'s algorithm calculates the so-called Photo-Response Nonuniformity noise (PRNU) for each photo. It has been shown that the PRNU is specific to each camera and serves as a fingerprint for each device. The variation in PRNU among devices is caused by sensor imperfections that arise during the manufacturing process and over the device's lifetime.
Lukás et al.'s algorithm calculates the camera's fingerprint by denoising an input image I with a filter F and computing the noise residual N = I − F(I). The value of N is calculated for each camera separately and serves as the camera's unique fingerprint. This approach provides good camera-recognition efficacy but is very time-consuming, and thus cannot be used at larger scales. The obtained fingerprint is a strong link between a set of pictures and a device, allowing identification of the camera. Moreover, it cannot easily be removed from an image without a serious loss of image quality. In this paper we examine the robustness of the aforementioned algorithm against attempts to break the link between the image and the camera. We show that some of the most "intuitive" strategies, such as adding noise, Gaussian blurring or removing the pixels' least significant bit, do not bypass Lukás et al.'s camera identification algorithm without degrading picture quality. On the other hand, we present a technique that obscures the camera fingerprint without significant computational overhead or deterioration of the image. This work is a continuation of research described in [1], presented at the SECRYPT 2017 conference.
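The residual computation above can be sketched in a few lines. This is an illustrative sketch only: Lukás et al. use a specific wavelet-based denoising filter F, for which a Gaussian low-pass filter stands in here, and the function names are our own.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(image, sigma=1.0):
    """Noise residual N = I - F(I). Lukas et al. use a wavelet-based
    denoising filter F; a Gaussian low-pass filter stands in for F
    here to keep the sketch short."""
    image = image.astype(np.float64)
    return image - gaussian_filter(image, sigma=sigma)

def camera_fingerprint(images, sigma=1.0):
    """Average the noise residuals of several images from one camera."""
    return np.mean([noise_residual(img, sigma) for img in images], axis=0)

# A perfectly flat image contains no noise, so its residual is zero.
flat = np.full((32, 32), 128, dtype=np.uint8)
print(np.allclose(noise_residual(flat), 0.0))  # True
```

Averaging residuals over many images suppresses scene content and leaves the sensor-specific PRNU component, which is why a larger "training" set yields a cleaner fingerprint.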

1.1 Contribution

The contribution of this paper is twofold. First, we show that simple and natural strategies like adding salt and pepper noise, Gaussian blurring or removing least significant bit of pixel intensities do not prevent from camera identification in terms of Lukás et al.’s algorithm. Mentioned methods work only in the case of a significant deterioration of image quality. Second, we propose a method that is capable to bypass camera identification with negligible influence on image quality. Cropping and stretching image to the original quality can effectively “confuse” Lukás et al.’s algorithm. We show that it is sufficient to symmetrically remove just 6 pixels from each edge of the image and upsample it to the original size in order to deceive considered algorithm.

1.2 Organization of the paper

Everywhere in the paper a bold font denotes matrices or vectors. The paper is organized as follows. In Section 2 we refer to some of previous and related works. In Section 3 we give some preliminaries useful in the rest of the paper. In Section 4 we show that Lukás et al.’s algorithm is robust against some basic techniques. In Section 5 we present an adjustment of Lukás et al.’s algorithm for fragment of pictures. Section 6 describes a surprisingly simple protection method that prevents from camera identification. We conclude this work in Section 7.

2 Previous and related work

The issue of camera fingerprinting has been investigated in several aspects. A camera fingerprint can be removed in various ways. A study of PRNU robustness and reliability in forensic settings is presented in [30]. It is possible to counterfeit the characteristic features of a camera in order to produce an image "pretending" to have been taken by another camera. Such an approach was presented in [13], where a method for a PRNU fingerprint-copy attack is described. The aim is to obfuscate the PRNU of a particular camera by "inserting" into its images the PRNU of another camera. The authors show that this can be done by performing some simple algebraic operations. Assume that \(\hat {\mathbf {K}_{\mathbf {N}}}\) is a camera fingerprint estimated from N images and J is an image into which we want to implant this fingerprint. Then \(\mathbf {J}^{\prime } = \mathbf {J}(1+\alpha {\hat {\mathbf {K}_{\mathbf {N}}}})\), where α > 0 is a scalar fingerprint strength. Extensive experimental verification shows that such an "exchange" of camera fingerprints is efficient, i.e., based on \(\hat {\mathbf {K}_{\mathbf {N}}}\) we can produce a fake photo \(\mathbf {J}^{\prime }\) that appears to originate from that camera. A quite similar approach is described in [38]. However, both approaches may be impractical, because one needs a representative image set from the camera whose fingerprint is to be implanted, and the procedure affects the actual image content (i.e., the stored information). Moreover, such image forgery may be easily detected. These approaches are also very time-consuming.
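The algebraic operation behind the fingerprint-copy attack is a single element-wise multiplication. The sketch below illustrates it under stated assumptions: `implant_fingerprint` is an illustrative name, `K_hat` is a pre-estimated fingerprint array, and clipping to [0, 255] is our addition for valid pixel ranges.

```python
import numpy as np

def implant_fingerprint(J, K_hat, alpha=0.01):
    """Fingerprint-copy attack (sketch of [13]): multiply image J
    element-wise by (1 + alpha * K_hat) to implant the estimated
    fingerprint K_hat of another camera; alpha > 0 sets the strength."""
    forged = J.astype(np.float64) * (1.0 + alpha * K_hat)
    return np.clip(forged, 0, 255)

rng = np.random.default_rng(1)
J = rng.integers(0, 256, (8, 8)).astype(np.float64)
K_hat = rng.normal(0, 1, (8, 8))
# Zero strength must leave the image untouched.
print(np.array_equal(implant_fingerprint(J, K_hat, alpha=0.0), J))  # True
```

Because the perturbation scales with both α and the pixel intensity, small α keeps the forgery visually imperceptible while still biasing the correlation test toward the implanted fingerprint.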

PRNU Decompare [28] was a piece of software proposed in 2014 that aimed to anonymize images by manipulating their PRNU. It removes, forges or even destroys the PRNU of an image without a negative impact on image quality. However, this software seems to be no longer supported or available.

In [17] a method using the Color Filter Array (CFA) to create a pattern in digital images is presented. Such a solution can be used for concealing traces of image manipulation. As an image similarity measure, the peak signal-to-noise ratio (PSNR) is used. However, the proposed solution is appropriate only for CFA-based forensic methods. In [18] the camera fingerprint is modeled as an additive signal in images, and it is proposed to remove the fingerprint with a fixed strength. However, better results can be achieved by using varying strengths, as shown in [13]. An attempt to remove the sensor information via flatfielding is presented in [21]. In [1] a fast method for camera recognition, as well as a method for preventing fingerprint linking in grayscale images, was proposed. The algorithm, based on the peak signal-to-noise ratio, is very fast in comparison with [21]; however, its classification efficiency is lower.

Recently, more and more approaches use deep learning or convolutional neural networks (CNNs) for camera identification. An example of an algorithm based on convolutional neural networks is presented in [2]. The proposed algorithm learns features characterizing each camera from acquired images through four convolutional layers. Results showed that this strategy outperforms former machine-learning-based algorithms in the classification of 64 × 64 pixel image patches. In [37] a robust multi-classifier based on a CNN was proposed. The main advantage of the proposed method is that it can identify multiple camera models in a single comparison. The core of the method is similar to [2], i.e., images are pre-processed for patch selection. The general classification accuracy is high and usually reaches about 98%; however, for some cameras the proposed method achieves very poor results (classification accuracy of 41%). Moreover, the time performance of this method, and also of [2], is not compared with non-CNN-based algorithms.

Source camera identification based on deep learning is also investigated in [8]. Convolutional layers composed of a set of high-pass filters are applied to the input image in order to extract and learn characteristic features. The authors conducted experiments on a set of modern smartphones: an iPhone 5, a Samsung Galaxy S4 and a Samsung Galaxy Tab II. Classification accuracy is very high (not less than 98% for each model); however, an image dataset based on only three devices is not representative.

In [24] a framework for iris and camera model identification based on deep learning is discussed. The proposed algorithm uses convolutional neural networks for (1) iris and (2) camera sensor recognition. The solution can be especially useful for double-checking user authentication, for instance for iris-biometric login into an authentication system using mobile devices.

Another approach using convolutional neural networks is presented in [34]. The network consists of three convolutional layers. The first layer is responsible for calculating the noise pattern N, which is extracted from images using the well-known formula N = I − F(I) [13, 16, 21], where I denotes the input image and F is the denoising filter. Noise patterns are convolved with 64 kernels of size 3 × 3, and the produced feature maps have size 126 × 126. The second layer produces feature maps of size 64 × 64. The third layer applies convolutions with 32 kernels of size 3 × 3. A Rectified Linear Unit (ReLU) activation function is applied to the output of every convolutional layer. However, learning image features from denoising was proposed by Lukás et al. in 2006, which we consider state-of-the-art in this paper [21]. The time needed to train the network and classify images is not examined.

In [33] an analysis of the effect of image features on the PRNU is described. In order to increase the quality of the PRNU, a weighting function is proposed. Image regions yielding reliable and unreliable PRNU are estimated, and the weighting function assigns weights for pixel-based correlation: higher weights go to image regions that provide reliable PRNU, smaller weights to regions giving less reliable PRNU. Experiments showed that the proposed technique correctly described image features, which slightly improved the accuracy of camera identification compared to [21].

In [15] a study on improving PRNU-based camera identification is presented. It is proposed to extract the PRNU separately from the low- and high-frequency components of the image. Results on the Dresden Image Database [12] showed that the PRNU obtained with the proposed method contains fewer high-frequency details of the images. However, it is not clearly shown how this change improves camera identification.

In [19] a compact representation of the sensor fingerprint is introduced in order to reduce the computational cost of fingerprint processing. The proposed framework compresses the sensor pattern noise (SPN), which significantly speeds up processing. The compact SPN representation can be obtained directly from a high-dimensional SPN. Experiments on the Dresden Image Database [12] reveal that the compact representation of the fingerprint does not decrease identification accuracy.

In [25] the mismatches of correlation values of multiple PRNUs are analyzed. The author provides a very simple formula that predicts the standard deviation of the mismatch of correlation values when they are close to 0 but do not belong to the particular camera. The proposed method is expected to achieve reliable results in camera identification; however, it is not clearly described how it could be used to improve camera identification accuracy.

In [36] a content-adaptive fusion residual network is described. Images are divided into three categories: saturated, smooth and others. For each category a fusion residual network is trained by transfer learning. Experimental evaluation on the Dresden Image Database [12] confirms the high accuracy of the proposed approach. In [29] the DenseNet convolutional network is proposed for camera identification. Deep learning for camera recognition is also used in [9], where a fully connected network is discussed.

Some denoising models introduce so-called attention mechanisms and may therefore be related to camera recognition. An attention model outputs a "summary" of the input vector or matrix. Attention models are also used for detecting salient objects in images. Examples can be found in [6, 10, 20].

In [22] the vulnerability of deep learning approaches to adversarial attacks on source camera identification is studied. It is considered how to produce fake photos in order to make a CNN-based classifier misclassify cameras. Several attacks are proposed in which the image undergoes lossless or lossy compression. Sample attack scenarios include the Fast Gradient Sign Method (FGSM) [14], DeepFool [26] and the Jacobian-based Saliency Map Attack (JSMA) [27]. The idea of FGSM is quite simple and relies on adding additive noise. However, in some cases image quality can be visibly affected, so DeepFool was introduced to overcome this problem. DeepFool is based on a local linearization of the classifier under attack. JSMA is a greedy iterative procedure that computes a saliency map at each iteration and then modifies the pixels that contribute most to the correct classification of the image. Experiments showed that such attacks efficiently deceive a CNN-based classifier.

3 Preliminaries

Below, we briefly recall the state-of-the-art camera identification algorithm we aim to protect against and indicate the most important literature referring to it. Then we describe the image datasets we have used to benchmark our algorithms.

3.1 Lukás et al.’s algorithm

Although Lukás et al.'s algorithm was presented in 2006, to the best of our knowledge it is still the state-of-the-art method for camera identification and is widely considered a reference point. A lot of research and patents are based on this approach, among others [3, 16, 24, 38]. Many recent approaches [23, 34] based on deep learning and convolutional neural networks also use the denoising formula presented in that paper. The idea is to calculate, for each camera, the Photo-Response Nonuniformity noise (PRNU), also called the "noise residual", on a certain number of "training" images. The calculated noise residuals of these images are averaged. A new photo is classified by calculating the correlation between the camera's averaged noise residual and the noise residual of the new photo. Typically, the correlation value for an image taken by the given camera is at least 0.01; if the image was not taken by that camera, the correlation is usually much smaller, for example 0.002 or 0.0001. Therefore, a correlation of ≥ 0.01 already reveals that the picture matches the camera. This strategy is presented in Fig. 1. A formal description of Lukás et al.'s algorithm is given in the Appendix.
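The matching decision reduces to one normalized correlation between two arrays. The following sketch shows that statistic under our own naming; the exact detector in [21] may normalize slightly differently, so treat this as an illustration of the decision rule, not a reimplementation.

```python
import numpy as np

def correlation(fingerprint, residual):
    """Normalized correlation used for the matching decision:
    a value of roughly 0.01 or more suggests the photo was taken
    by the camera; much smaller values suggest a mismatch."""
    a = fingerprint.ravel() - fingerprint.mean()
    b = residual.ravel() - residual.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

rng = np.random.default_rng(3)
n = rng.normal(size=(32, 32))
print(round(correlation(n, n), 6))  # 1.0: identical residuals correlate fully
```

For residuals of independent cameras the statistic hovers near 0, which is what makes the 0.01 threshold quoted above usable in practice.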

Fig. 1

Scheme of denoising images and classification of a new image [31]

3.2 Image datasets

We have conducted experiments on two datasets of images.

Dataset I

The first dataset consists of modern devices such as smartphones, drones, compact cameras and DSLRs. The following devices were used: smartphones and tablets (Acer Liquid Jade S, Apple iPhone 5S, LG K10, Samsung Galaxy S7 and a Samsung Galaxy Tab A (2016) tablet); compact cameras (Canon SX160, Canon SX270, Nikon P100); drones (DJI Spark, Yuneec Breeze 4K); and two DSLRs (Nikon D3100 and D7200). All devices contain a CMOS imaging sensor except the Canon SX160 (CCD). Image resolutions range from 8 to 14 megapixels for nearly all devices, and 24 megapixels for the Nikon D7200. We used 45 images per device.

Dataset II – The Dresden Image Database

For large-scale experiments, we have also used a large, public set of JPG images from the Dresden Image Database [12]. This database consists of tens of thousands of images taken by dozens of cameras and is often used for research or as a benchmark [5, 11]. We used 11787 images from 48 cameras: Agfa DC 733s, Agfa DC 830i, Agfa Sensor 505, Agfa Sensor 530s, Canon Ixus 55, Canon Ixus 70 (3 devices), Casio EX Z150 (5 devices), Kodak M1063 (5 devices), Nikon CoolPix S710 (5 devices), Nikon D70 (2 devices), Nikon D70s (2 devices), Nikon D200 (2 devices), Olympus 1050SW (5 devices), Praktica DCZ5 (5 devices), Rollei RCP 7325XS (3 devices), Samsung L74 (3 devices) and Samsung NV15 (3 devices). In this paper we aim to provide a method preserving users' privacy, hence we treat each instance of a specific camera model separately (for example, we distinguish between Canon Ixus 70 (1) and Canon Ixus 70 (2)).

4 Robustness of Lukás et al.'s algorithm against typical methods of removing characteristic features

In this section we examine the robustness of the PRNU-based method against some basic image processing operations. We check whether the correlation value (introduced in the Preliminaries section) between the calculated camera's noise residual (which in the remainder of this paper we call the fingerprint) and images after certain transformations remains in the range indicating the photo's original device.

It turns out that many methods that should intuitively remove the camera's characteristic features do not work without re-iterating the processing (or increasing the parameter values) to the point of significantly degrading the quality of the processed image. The image may then be useless for a wide range of applications, while Lukás et al.'s algorithm still correctly links the image to the camera. In particular, we consider adding random noise to the images (also known as salt and pepper noise), using Gaussian blur, and removing the pixels' least significant bit (LSB), i.e., setting the least significant bit of each pixel's intensity in each color channel to 0, independently of its previous value. The results show that even when these methods are applied until image quality is strongly affected, Lukás et al.'s algorithm still correctly identifies the camera.

4.1 Adding salt and pepper noise

We consider images represented in the RGB model, where pixel values in each color channel \(\mathcal {R}\) (red), \(\mathcal {G}\) (green) and \({\mathscr{B}}\) (blue) take values from [0,255]. We replace a fraction k of the pixels in the image so that half of them are set to 0 and half to 255, at random positions. We examine the robustness of Lukás et al.'s algorithm against this procedure, measured as a correlation peak for the correct camera's fingerprint, i.e., the possibility of correctly linking the noised image with the camera. We used MATLAB's salt & pepper noise, implemented in the imnoise function, with the noise parameter k (the noise density) ranging from 0.001 to 1.0. Experiments were conducted on Dataset I and the Dresden Image Database, described in Section 3. Mean correlation values over all cameras are shown in Fig. 2 and in Tables 1 and 2.
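The noising step can be reproduced without MATLAB. The sketch below is a plain-NumPy analogue of imnoise's salt & pepper mode under our assumptions: `salt_and_pepper` is an illustrative name, and exactly half of the replaced pixels become salt and half pepper, matching the description above.

```python
import numpy as np

def salt_and_pepper(image, k, rng=None):
    """Replace a fraction k of the pixels, half with 0 (pepper)
    and half with 255 (salt), at random positions."""
    rng = rng or np.random.default_rng()
    out = image.copy().reshape(-1)
    n = int(round(k * out.size))
    idx = rng.choice(out.size, size=n, replace=False)
    out[idx[: n // 2]] = 0      # pepper
    out[idx[n // 2 :]] = 255    # salt
    return out.reshape(image.shape)

img = np.full((100, 100), 128, dtype=np.uint8)
noisy = salt_and_pepper(img, k=0.2, rng=np.random.default_rng(4))
print(np.mean(noisy != 128))  # 0.2: exactly a fifth of the pixels changed
```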

Fig. 2

Mean correlation values (Y-axis) as the salt and pepper density increases (X-axis)

Table 1 Correlation values for successive densities of salt and pepper noise, Dataset I
Table 2 Correlation values for successive densities of salt and pepper noise, Dresden Image Database

The results indicate that fingerprint obfuscation is achieved at densities of at least k = 0.2 (Dresden Image Database) and k = 0.5 (Dataset I). The correlation values then equal 0, but image quality is strongly affected, as depicted in Fig. 3.

Fig. 3

On the left: the original image (top) and the image with random-point noise at density 0.2 (bottom); camera: Kodak M1063. On the right: the original image (top) and the image after Gaussian blur with σ = 0.001 (bottom); camera: Samsung NV15

The results indicate that image quality has to be strongly affected to achieve a correlation value of 0.00: the density of random points in the image must be no smaller than 0.2. Therefore, this approach cannot be considered usable.

4.2 Gaussian blurring

Gaussian smoothing, often referred to as Gaussian blur, is a standard and commonly used filter based on the normal distribution function (also called the Gaussian kernel) with standard deviation σ and mean 0; σ defines the strength of the blurring [7]. In our experiments, the Gaussian kernel is convolved with the input image I to calculate the output image \(\mathbf {I}^{\prime }\). The filter decreases the contrast between pixels, making the colors in the image "softer".
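The convolution step can be sketched as follows. This is an assumption-laden illustration: it uses scipy's `gaussian_filter` as the kernel implementation, blurs the two spatial axes of an RGB array while leaving the channel axis untouched, and the function name is ours.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_blur(image, sigma):
    """Convolve each color channel of an RGB image with a Gaussian
    kernel of standard deviation sigma (spatial axes only)."""
    blurred = gaussian_filter(image.astype(np.float64), sigma=(sigma, sigma, 0))
    return np.clip(blurred, 0, 255).astype(np.uint8)

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
tiny = gaussian_blur(img, 0.001)
print(np.array_equal(tiny, img))  # True: sigma = 0.001 barely changes the image
```

Note that for σ as small as 0.001 the truncated kernel degenerates to a single tap, so the output is pixel-identical to the input; this is consistent with the observation below that image quality at σ = 0.001 is comparable to the original.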

The image blurring experiments were conducted with σ equal to 0.001, 0.01, 0.1, 0.2 and 0.3.

The output image quality is comparable with that of the original image. Obviously, increasing σ makes the image quality worse and, surprisingly, in some cases it also increases the correlation values. Thus this approach cannot be used for large-scale image processing aimed at removing the fingerprint. Results are presented in Tables 3 and 4. The best results were achieved for σ = 0.001, where the correlation values for nearly all cameras were equal to 0.00; however, in some cases (DJI Spark, Agfa DC 733s and Agfa DC 830i) the correlations were as high as 0.1. Figure 3 shows the result of Gaussian blurring with σ = 0.001.

Table 3 Correlation values for Gaussian blurring, Dataset I
Table 4 Correlation values for Gaussian blurring, Dresden Image Database

To sum up, Gaussian blurring cannot be considered a stable solution for breaking the link between image and camera. Although image quality is better than in the case of salt and pepper noise, the degradation is still noticeable, and for most parameter values the correlation is not small enough to assume unlinkability between the camera and the image.

4.3 Removing least significant bit

Yet another strategy is to remove the pixels' least significant bit, i.e., to set the least significant bit of each pixel's intensity in each color channel to 0. For each pixel the LSB is removed with probability p equal to 0.001, 0.125, 0.25, 0.5 or 1.0. However, the results indicate that removing the LSB does not decrease the correlation values for either dataset (Fig. 4).
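Clearing the LSB with a per-pixel probability is a one-line bit mask. The sketch below is illustrative (`remove_lsb` is our own name; the mask is drawn independently per pixel as described above):

```python
import numpy as np

def remove_lsb(image, p=1.0, rng=None):
    """Clear the least significant bit of each pixel intensity with
    probability p, independently per pixel (sketch)."""
    rng = rng or np.random.default_rng()
    out = image.copy()
    mask = rng.random(image.shape) < p
    out[mask] &= 0xFE  # 0xFE = 11111110: zero the lowest bit
    return out

img = np.arange(256, dtype=np.uint8).reshape(16, 16)
stripped = remove_lsb(img, p=1.0)
print(np.all(stripped % 2 == 0))  # True: every intensity is now even
```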

Fig. 4

Mean correlation values (Y-axis) as the probability of pixels with removed LSB increases (X-axis)

Increasing the number of pixels with removed LSBs does not significantly decrease the correlation values, and correlations even at the level of 0.01 can still allow camera identification. Moreover, image quality is strongly degraded, as the example in Fig. 5 depicts.

Fig. 5

On the left: the original image I; on the right: the image \(\mathbf {I}^{\prime }\) with LSB removed from all pixels. The mean of D = |I - \(\mathbf {I}^{\prime }|\) equals 59.14, standard deviation of D is 31.82, and the median is 61. Camera: Rollei RCP 7325XS

In conclusion, removing the least significant bit is not a reasonable operation for breaking the link between image and camera, as the correlation values remain high in almost all cases. Moreover, image quality must be strongly affected, which makes the image useless.

4.4 Numerical analysis of the deformation

We have analyzed the similarity between the original image I and the noised images \(\mathbf {I}^{\prime }\). The residuum D = |I - \(\mathbf {I}^{\prime }\)| is a matrix of absolute pixel differences between the original and the noised image. Such residua have been calculated both for salt and pepper noise and for Gaussian blur. Obviously, the higher the density of random noise points, the worse the image quality. The minimal density for fingerprint obfuscation is 0.5 (Dataset I) and 0.2 (Dresden Image Database), but the image quality is then strongly degraded; at the same time, for nearly all cameras the correlation value is about 0.00. In the case of Gaussian blurring, for most cameras σ = 0.001 is sufficient, and the image quality is relatively good. Detailed results on image quality are presented in Figs. 6 and 7.
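The deformation statistics reported in Figs. 6 and 7 follow directly from the residuum definition. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def deformation_stats(I, I_prime):
    """Mean, standard deviation and median of the residuum
    D = |I - I'| between the original and the processed image."""
    D = np.abs(I.astype(np.float64) - I_prime.astype(np.float64))
    return D.mean(), D.std(), np.median(D)

I  = np.array([[10, 20], [30, 40]], dtype=np.uint8)
I2 = np.array([[12, 20], [30, 44]], dtype=np.uint8)
mean, std, med = deformation_stats(I, I2)
print(mean, med)  # 1.5 1.0
```

Casting to float before subtraction matters: subtracting uint8 arrays directly would wrap around modulo 256 and silently corrupt the statistics.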

Fig. 6

Average difference, standard deviation and median of pixel intensities D = |I - \(\mathbf {I}^{\prime }\)| for each tested camera, for salt and pepper noise. I stands for the original image, \(\mathbf {I}^{\prime }\) is the noised image

Fig. 7

Average difference, standard deviation and median of pixel intensities D = |I - \(\mathbf {I}^{\prime }\)| for each tested camera, for Gaussian blur. I stands for the original image, \(\mathbf {I}^{\prime }\) is the blurred image

It is clearly visible, and quite intuitive, that increasing the salt and pepper density decreases the correlation between the image and the original fingerprint. However, this also confirms that simple image deterioration by adding noise cannot change the fingerprint in a manner that prevents linking the camera with an image in terms of Lukás et al.'s algorithm.

4.5 Summary

In this section we have examined simple image-noising techniques intended to break the link between the image and the camera in terms of Lukás et al.'s algorithm. We investigated the following methods: adding random points to the image (salt and pepper noise), Gaussian blurring, and removing the least significant bit. The analysis indicated that none of these techniques gives reasonable results in fooling Lukás et al.'s algorithm. The worst results were obtained with salt & pepper noise and LSB removal, where the image has to be strongly degraded before the algorithm fails to recognize it. Somewhat better results are observed for Gaussian blurring, where image quality remains relatively acceptable. However, this approach seems unstable, because for some parameter values of the Gaussian blur, camera recognition still remains strong.

5 Adjustment of Lukás et al.’s algorithm for fragments of pictures

We have analyzed the impact of dividing the image into sections on the possibility of camera identification. We asked whether it is possible to adapt Lukás et al.'s algorithm when only a part of the picture is available (e.g., a rectangular fragment of size \(N^{\prime }\times M^{\prime }\)) for some \(N^{\prime }\leq N\) and \(M^{\prime } \leq M\). Let us divide the image into nine sections, as shown in Fig. 8. The experiments were conducted as follows. The fingerprint of 100 images was calculated only for section A1. Then we took an image not used for this fingerprint, divided it into nine sections A1, A2, …, A9, and calculated the correlation values between the fingerprint of section A1 and all sections of the image.
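The section division itself is straightforward array slicing. A minimal sketch, assuming (as is common but not stated in the text) that any remainder pixels at the right and bottom edges are discarded; `split_into_sections` is an illustrative name:

```python
import numpy as np

def split_into_sections(image, rows=3, cols=3):
    """Divide an image into a rows x cols grid of sections
    (A1 ... A9 for the 3 x 3 grid of Fig. 8)."""
    h, w = image.shape[:2]
    sh, sw = h // rows, w // cols
    return [image[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw]
            for r in range(rows) for c in range(cols)]

img = np.zeros((90, 120), dtype=np.uint8)
sections = split_into_sections(img)
print(len(sections), sections[0].shape)  # 9 (30, 40)
```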

Fig. 8

Sample division of the image into sections

The results shown in Fig. 9 indicate that the correlation values between the fingerprint and the corresponding image section are high enough to conclude that the section belongs to the same camera. However, correlations of the other sections in nearly all cases do not allow reasonable fingerprint identification: correlation values are generally at the level of 0.00, and in some cases at most 0.01. Detailed results are presented in Tables 5 and 6.

Fig. 9

Correlation values for successive image sections (mean over all cameras of all datasets)

Table 5 Correlation values between camera’s fingerprint and image sections; Dataset I
Table 6 Correlation values between camera’s fingerprint and image sections; Dresden Image Database

For smaller pictures this approach works in a completely analogous way, although the correlation values are slightly smaller. To conclude, a camera may be recognized from image sections only by analyzing the corresponding sections of the image; the correlation values between different image sections are too low to conclude that they were made with a particular camera.

6 Efficient quality-preserving prevention against Lukás et al.’s algorithm

As shown in Section 4, Lukás et al.'s algorithm is robust against many basic image processing methods aiming to break the link between an image and the device of its origin. However, it is possible to break that link without a huge loss of picture quality.

We have analyzed the possibility of camera identification when the photo is cropped and stretched back to the original size by the following operations. Input: image I in RGB of size M × N; Output: cropped image \(\mathbf {I^{\prime }}\) in RGB of size M × N.

  1. Crop the image by a certain number of pixels;

  2. Upsample the cropped photo to the original size using the Lanczos resampling algorithm.

Note that the final size of the processed image has to be the same as that of the original image. This requirement is first and foremost important for the PRNU-based algorithm, which works only for images of the same size; however, it is also important for a more practical reason: an adversary may easily check that the resolution does not match any commercially available camera and conclude that the image was tampered with. As shown in [32], the method of resampling may affect the image significantly. The Lanczos kernel was chosen because its deterioration is negligible to the observer while it introduces artifacts that may influence the PRNU. The Lanczos resampling method is recalled in the Appendix.
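The two-step procedure above can be sketched with Pillow, assuming its Lanczos implementation is an acceptable stand-in for the resampler used in the experiments; `crop_and_stretch` and the default `margin=6` (the value found sufficient in our experiments) are illustrative.

```python
from PIL import Image

def crop_and_stretch(img, margin=6):
    """Symmetrically crop `margin` pixels from every edge, then
    upsample back to the original size with the Lanczos kernel."""
    w, h = img.size
    cropped = img.crop((margin, margin, w - margin, h - margin))
    return cropped.resize((w, h), Image.LANCZOS)

original = Image.new("RGB", (640, 480), (120, 130, 140))
processed = crop_and_stretch(original, margin=6)
print(processed.size == original.size)  # True: resolution is preserved
```

Because the output resolution matches the original, the processed photo still looks like it came straight from a commercial camera, which addresses the practical concern discussed above.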

Experiments

We symmetrically removed from 1 to 20 pixels from each edge of the photo and upsampled it to the original size. Obviously, this is associated with a slight loss of information, but there is no significant quality degradation. As more pixels are cropped, the correlation values gradually decrease, while some objects located near the borders might be removed from the photo. Experiments showed that cropping each edge of a photo by 6 pixels prevents identification of the original camera's fingerprint, as the correlation computed by Lukás et al.'s algorithm equals 0.00. Mean correlation values over the devices of all datasets are shown in Fig. 10.

Fig. 10

Correlation values for successive numbers of cropped pixels (mean over all cameras)

Cropping and "stretching" the image to its original size is very simple and fast. Moreover, cropping a small number of pixels does not degrade picture quality or disturb the geometry of the photographed shapes; one can interpret the resulting image as a photo that is slightly zoomed in compared to the original. An example of an image after the procedure is presented in Fig. 11. A general comparison of image quality is shown in Fig. 12.

Fig. 11

On the left: the original photo of resolution 2592 × 1944 px; in the middle: the photo cropped by 5 pixels from each edge; on the right: the photo cropped and stretched back to the original size. Camera: Agfa Sensor 505

Fig. 12

Average difference, standard deviation and median of pixel intensities D = |I - \(\mathbf {I}^{\prime }\)| for each tested camera, in the cropping experiments. I stands for the original image, \(\mathbf {I}^{\prime }\) is the cropped image

It is important to note that a procedure consisting of multiple iterations of Lanczos resampling (upsampling and downsampling) alone, i.e., without prior cropping, also decreases the correlation values between the camera and the image (compared with the respective original image); however, it does not always result in a correlation value equal to zero. In such cases the camera can still be successfully identified. Precise results are presented in Table 7.

Table 7 Correlation values for images resampled using Lanczos kernel (without cropping); all datasets

Summary

The results show that cropping the image by just 6 pixels at each edge and upsampling it to the original resolution defeats camera recognition by Lukás et al.'s algorithm. This strategy is fast and almost imperceptible, with only a slight loss of original image information. We have also shown that multiple applications of the Lanczos resampling algorithm (upsampling and downsampling the image without cropping) reduce the correlation values, although they do not always prevent camera recognition; in this case image quality is likewise unaffected, which keeps the image useful.

7 Conclusion

In this paper we focus on preserving the privacy of digital camera users, namely on the linking of photos with the device of their origin. We have shown that some typical methods are not sufficient for breaking the link between a photo and the camera's fingerprint calculated by the widely used, state-of-the-art Lukás et al.'s algorithm: to achieve small correlation values between the image and the camera using those methods, it is necessary to significantly reduce image quality. However, we have proposed a method which successfully obscures the camera fingerprint. This method, based on image cropping and resampling, is shown to be efficient in breaking the link between the image and the device. Moreover, we have also analyzed the impact of dividing the image into sections on the possibility of camera identification.

As future work, we plan to extend the experiments on camera identification by image sections. This strategy is feasible as long as we know which section of the image corresponds to the fingerprint of that section. The open problem is to identify the camera without knowing the "location" of the fingerprint in the image. Another aspect to investigate concerns the robustness of classification methods based on deep learning and convolutional neural networks. In particular, we are going to perform analogous experiments to check whether it is possible to "confuse" a CNN-based classifier with the proposed methods.