Watermark Optimization
Reducing the Processing Time of the Hierarchical Watermark Detector When Applied to Unmarked Images
In this paper, we improve the performance of the hierarchical detector we proposed in [1] for real-time software or low-cost hardware implementation. Although the original hierarchical detector is faster than a sub-sampled brute-force detector when processing marked images, it unnecessarily continues to process unmarked images looking for a watermark that is not present. This processing is time-consuming; hence, it represents a significant deployment obstacle. The improved detector, however, avoids most of the processing of the unmarked areas of an image by exploiting the presence of a reference signal usually included with the embedded watermark. This reference signal enables the detector to synchronize the image after it has been subjected to a geometric transformation (scaling, rotation, and translation). The improved detector refrains from searching an image area any further whenever the level of the reference signal is very weak or the estimated scale factors and rotation angles associated with this reference signal are not consistent among the processed blocks within the same layer in the hierarchy. The proposed detector has been implemented, and the experimental results indicate that it is computationally more efficient with unmarked images, while achieving a detection rate similar to that of the original hierarchical detector.
A fast hierarchical watermark detector for real-time software or low-cost hardware implementation
In this paper, we develop a spread spectrum-based watermark algorithm for real-time software or low-cost hardware implementation. The developed detector is suitable for devices such as stand-alone watermark readers, cellular phones, and PDAs. These devices have primitive operating systems with limited processing power, memory, and system bandwidth. Our embedder tiles the watermark over the host image to let the watermark be detected from any region in the digital or printed watermarked image as the data is streamed through the device. It also adapts the watermark strength locally to maximize detection and minimize watermark visibility. Consequently, the watermark may be detectable only in a few regions of the image that are not necessarily aligned with the original tile boundaries. To avoid a brute-force search, our detector uses a hierarchical search algorithm to quickly zoom into the region with the strongest watermark. This approach permits a real-time software implementation of the detector and reduces the necessary gate count, on-chip memory, and system bandwidth for a hardware implementation. Software simulation results of the developed algorithm indicate that the algorithm is very efficient and the detection results are better than those obtained using a sub-sampled brute-force search.
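The zoom-in idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's detector: the one-dimensional score array, segment count, and stopping size below are invented for the example, and the per-segment "detection statistic" is just a mean.

```python
# Hypothetical sketch of a hierarchical zoom-in search: repeatedly split the
# search range into segments, score each segment by its mean detection value,
# and recurse into the strongest segment instead of scanning every position.

def hierarchical_search(scores, min_size=4, branches=4):
    """Return (start, end) of the sub-range with the strongest mean score."""
    lo, hi = 0, len(scores)
    while hi - lo > min_size:
        seg = max(1, (hi - lo) // branches)
        # Candidate segments at this layer of the hierarchy.
        candidates = [(lo + i * seg, min(lo + (i + 1) * seg, hi))
                      for i in range(branches)]
        # Descend into the segment with the largest average statistic.
        lo, hi = max(candidates,
                     key=lambda c: sum(scores[c[0]:c[1]]) / (c[1] - c[0]))
    return lo, hi

# Toy detection scores with a strong watermark response around index 37.
scores = [0.1] * 64
for i, v in zip(range(34, 41), [0.3, 0.5, 0.9, 1.0, 0.9, 0.5, 0.3]):
    scores[i] = v
lo, hi = hierarchical_search(scores)
print(lo <= 37 < hi)  # the peak lies inside the returned range
```

The search touches only a logarithmic number of segments rather than every candidate region, which is the source of the claimed savings over a brute-force scan.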
Video & Compressed Domain
Evaluation of watermarking low-bit-rate MPEG-4 bit streams
A novel algorithm for watermarking low bit-rate MPEG-4 compressed video is developed and evaluated in this paper. Spatial spread spectrum is used to invisibly embed the watermark into the host video. A master synchronization template is also used to combat geometrical distortion such as cropping, scaling, and rotation. The same master synchronization template is used for watermarking all video object planes (VOPs) in the bit-stream, but each object can be watermarked with a unique payload. A gain control algorithm is used to adjust the local gain of the watermark, in order to maximize watermark robustness and minimize the impact on the quality of the video. A spatial and temporal drift compensator is used to eliminate watermark self-interference and the drift in quality due to AC/DC prediction in I-VOPs and motion compensation in P- and B-VOPs, respectively. Finally, a bit-rate controller is used to maintain the data rate at an acceptable level after embedding the watermark. The developed watermarking algorithm is tested using several bit-streams at bit-rates ranging from 128 to 750 kbit/s. The visibility and the robustness of the watermark after decompression, rotation, scaling, sharpening, noise reduction, and trans-coding are evaluated.
Digital watermarking of low bit-rate advanced simple profile MPEG-4 compressed video
A novel MPEG-4 compressed domain video watermarking method is proposed and its performance is studied at video bit rates ranging from 128 to 768 kb/s. The spatial spread-spectrum watermark is embedded directly into compressed MPEG-4 bitstreams by modifying DCT coefficients. A synchronization template combats geometric attacks, such as cropping, scaling, and rotation. The method also features a gain control algorithm that adjusts the embedding strength of the watermark depending on local image characteristics, increasing watermark robustness or, equivalently, reducing the watermark's impact on visual quality. A drift compensator prevents the accumulation of watermark distortion and reduces watermark self-interference due to temporal prediction in inter-coded frames and AC/DC prediction in intra-coded frames. A bit-rate controller maintains the bit rate of the watermarked video within an acceptable limit. The watermark was evaluated and found to be robust against a variety of attacks, including transcoding, scaling, rotation, and noise reduction.
Real-time application of digital watermarking to embed tactical metadata into full motion video captured from unmanned aerial systems
A persistent challenge with imagery captured from Unmanned Aerial Systems (UAS) is the loss of critical information such as associated sensor and geospatial data, and prioritized routing information (i.e., metadata) required to use the imagery effectively. Often, there is a loss of synchronization between data and imagery. The losses usually arise due to the use of separate channels for metadata, or due to multiple imagery formats employed in the processing and distribution workflows that do not preserve the data. To contend with these issues and provide another layer of authentication, digital watermarks were inserted at the point of capture within a tactical UAS. Implementation challenges included traditional requirements surrounding image fidelity, performance, payload size, and robustness; application requirements such as power consumption, digital-to-analog conversion, and a fixed-bandwidth downlink; and a standards-based approach to geospatial exploitation through a service-oriented architecture (SOA) for extracting and mapping mission-critical metadata from the video stream. The authors capture the application requirements, implementation trade-offs, and ultimately an analysis of the selected algorithms. A brief summary of results is provided from multiple test flights onboard the SkySeer test UAS in support of Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance applications within Network Centric Warfare and Future Combat Systems doctrine.
Communication Theory & Quantization Index Modulation
Comparative Performance of Watermarking Schemes Using M-Ary Modulation with Binary Schemes Employing Error Correction Coding
A common application of digital watermarking is to encode a small packet of information in an image, such as some form of identification that can be represented as a bit string. One class of digital watermarking techniques employs spread spectrum-like methods where each bit is redundantly encoded throughout the image in order to mitigate bit errors. We typically require that all bits be recovered with high reliability to effectively read the watermark. In many watermarking applications, however, straightforward application of spread spectrum techniques is not enough for reliable watermark recovery. We therefore resort to additional techniques, such as error correction coding. As proposed by M. Kutter [1], M-ary modulation is one such technique for decreasing the probability of error in watermark recovery. It was shown in [1] that M-ary modulation techniques could provide performance improvement over binary modulation, but direct comparisons to systems using error correction codes were not made. In this paper we examine the comparative performance of watermarking systems using M-ary modulation and watermarking systems using binary modulation combined with various forms of error correction. We do so in a framework that addresses both computational complexity and performance issues.
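M-ary modulation as described above can be sketched concretely. This is a hedged toy, not the compared systems from the paper: M, the sequence length, the flat host, and the noise level are all invented for illustration. Each symbol carries log2(M) bits by choosing which of M pseudorandom sequences to add; the decoder correlates against all M candidates and picks the maximum.

```python
import random

# Illustrative M-ary modulation sketch: M = 4 fixed +/-1 sequences, additive
# embedding into a host signal, maximum-correlation decoding of the symbol.
random.seed(7)
M, N = 4, 256
sequences = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(M)]

def embed(host, symbol, strength=1.0):
    """Add the sequence indexed by `symbol` (carrying log2(M) bits)."""
    return [h + strength * c for h, c in zip(host, sequences[symbol])]

def decode(received):
    # Correlate against every candidate sequence; the largest correlation wins.
    corr = [sum(r * c for r, c in zip(received, seq)) for seq in sequences]
    return max(range(M), key=lambda m: corr[m])

host = [0.0] * N
noisy = [x + random.gauss(0, 2.0) for x in embed(host, symbol=2)]
print(decode(noisy))  # recovers symbol 2 with high probability
```

The contrast with binary modulation plus error correction, which the paper evaluates, is that here redundancy is spent on sequence selection rather than on coded bits.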
Host-Aware Spread Spectrum Watermark Embedding Techniques
This paper explores techniques that involve the use of the embedder's knowledge of the cover work to help determine the watermark signal to be added to it. While the receiver always seeks to maximize a detection statistic which is a function of an a priori known pseudorandom sequence, the signal added to the cover work by the embedder is allowed to vary, on a per-chip basis, based upon the characteristics of the cover work. Although adaptation of an added watermark signal can be aimed at minimization of visual artifacts, this paper focuses on adaptation of the watermark signal to improve the readability of the signal outside of any human visual system constraints. This idea can be applied in various scenarios. Two specific examples are discussed. When source models are available and maximum-likelihood detection is used, the added watermark signal can be allowed to adapt to host signal variations in order to maximize the likelihood ratio detection statistic used at the receiver. Another instance where per-chip variation can be put to use is when a pre-filter is used to suppress the cover work prior to reading the watermark signal. In this case, the watermark signal is varied in such a way as to maximize the signal at the output of the pre-filter.
New wrinkle in dirty paper techniques
The many recent publications that focus upon watermarking with side information at the embedder emphasize the fact that this side information can be used to improve practical capacity. Many of the proposed algorithms use quantization to carry out the embedding process. Although both powerful and simple, recovering the original quantization levels, and hence the embedded data, can be difficult if the image amplitude is modified. In our paper, we present a method that is similar to the existing class of quantization-based techniques, but is different in the sense that we first apply a projection to the image data that is invariant to a class of amplitude modifications that can be described as order preserving. Watermark reading and embedding are performed with respect to the projected data rather than the original. Not surprisingly, by requiring invariance to amplitude modifications we increase our vulnerability to other types of distortions. Uniform quantization of the projected data generally leads to non-uniform quantization of the original data, which in turn can cause greater susceptibility to additive noise. Later in the paper we describe a strategy that results in an effective compromise between invariance to amplitude modification and noise susceptibility.
Quantizer characteristics important for quantization index modulation
Quantization Index Modulation (QIM) has been shown to be a promising method of digital watermarking. It has recently been argued that a version of QIM can provide the best information embedding performance possible in an information theoretic sense. This performance can be demonstrated via random coding using a sequence of vector quantizers of increasing block length, with both channel capacity and optimal rate-distortion performance being reached in the limit of infinite quantizer block length. For QIM, the rate-distortion performance of the component quantizers is unimportant. Because the quantized values are not digitally encoded in QIM, the number of reconstruction values in each quantizer is not a design constraint, as it is in the design of a conventional quantizer. The lack of a rate constraint in QIM suggests that quantizer design for QIM involves different considerations than does quantizer design for rate-distortion performance. Lookabaugh has identified three types of advantages of vector quantizers vs. scalar quantizers. These advantages are called the space-filling, shape, and memory advantages. This paper investigates whether all of these advantages are useful in the context of QIM. QIM performance of various types of quantizers is presented and a heuristic sphere-packing argument is used to show that, in the case of high-resolution quantization and a Gaussian attack channel, only the space-filling advantage is necessary for nearly optimal QIM performance. This is important because relatively simple quantizers are available that do not provide shape and memory gain but do give a space-filling gain.
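For readers new to QIM, the scalar case is easy to state. The sketch below is a minimal illustration, not the vector quantizers studied in the paper: bit 0 selects the even-multiples lattice of step D, bit 1 the odd-multiples lattice, and the step size D here is an arbitrary example value.

```python
# Minimal scalar QIM sketch: two interleaved quantization lattices, one per
# bit value. Decoding re-quantizes with both lattices and picks the closer.

D = 4.0  # quantization step; larger D = more robust, more distortion

def qim_embed(x, bit):
    offset = bit * D
    # Snap x to the nearest point of the lattice associated with `bit`.
    return round((x - offset) / (2 * D)) * (2 * D) + offset

def qim_decode(y):
    return min((0, 1), key=lambda b: abs(y - qim_embed(y, b)))

x = 10.3
y = qim_embed(x, 1)          # embed bit 1
print(qim_decode(y))         # -> 1
print(qim_decode(y + 1.5))   # still 1 after moderate additive noise
```

The paper's question of which vector-quantizer advantages matter (space-filling, shape, memory) only arises once this scalar picture is generalized to blocks of samples.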
Text Watermarking
Watermarking Electronic Text Documents Containing Justified Paragraphs and Irregular Line Spacing
In this paper, we propose a new method for watermarking electronic text documents that contain justified paragraphs and irregular line spacing. The proposed method uses a spread-spectrum technique to combat the effects of irregular word or line spacing. It also uses a BCH (Bose-Chaudhuri-Hocquenghem) error coding technique to protect the payload from the noise resulting from the printing and scanning process. Watermark embedding in a justified paragraph is achieved by slightly increasing or decreasing the spaces between words according to the value of the corresponding watermark bit. Similarly, watermark embedding in a text document with variable line spacing is achieved by slightly increasing or decreasing the distance between any two adjacent lines according to the value of the watermark bit. Detecting the watermark is achieved by measuring the spaces between the words or the lines and correlating them with the spreading sequence. In this paper, we present an implementation of the proposed algorithm and discuss its simulation results.
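The space-modulation-plus-correlation idea can be sketched for a single payload bit. This is a hedged toy, not the paper's full scheme (no BCH coding, no print-scan noise): the space counts, the irregular nominal widths, and the modulation depth below are invented for the example.

```python
import random

# Sketch of the word-spacing idea: spread one payload bit over all inter-word
# spaces with a +/-1 chip sequence, then detect by correlating the measured
# spaces against that sequence. The nominal spaces are irregular, as in
# justified text; the correlation averages that irregularity out.
random.seed(1)
n_spaces = 120
chips = [random.choice((-1, 1)) for _ in range(n_spaces)]
nominal = [random.uniform(3.0, 6.0) for _ in range(n_spaces)]  # irregular widths

def embed_bit(bit, delta=0.5):
    s = 1 if bit else -1
    # Slightly widen or narrow each space according to bit * chip.
    return [w + s * delta * c for w, c in zip(nominal, chips)]

def detect_bit(spaces):
    mean = sum(spaces) / len(spaces)
    corr = sum((w - mean) * c for w, c in zip(spaces, chips))
    return 1 if corr > 0 else 0

print(detect_bit(embed_bit(1)), detect_bit(embed_bit(0)))
```

In the paper each payload bit would additionally be protected by BCH coding before spreading, to survive printing and scanning.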
Synchronization Techniques
Watermark re-synchronization using log-polar mapping of image autocorrelation
Many watermarking algorithms embed the watermark into the image as contiguous non-overlapping tiles. This tiling structure forms an implicit synchronization template that can be revealed through autocorrelation. This template is composed of a set of weak peaks, replicating the relative position of the watermark tiles. Hence, synchronization can be resolved by comparing the actual locations of these peaks to the theoretical ones to determine the scaling factor and the orientation angle of the tiles. Unfortunately, these peaks are very weak and measuring their locations directly is not easy. In this paper, a log-polar mapping of the synchronization template is computed to convert the scaling factor and the rotation angle of the template into vertical and horizontal shifts. These shifts are then detected using a phase-only matched (POM) filter, which concentrates the weak energy from all peaks into a single peak that is much easier to detect. The scaling factor and orientation angle are determined from the location of this peak. Simulation results have shown that this method is very effective and produces accurate results.
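The property the log-polar mapping exploits can be shown in miniature on a single point (a sketch of the coordinate identity only, not of the autocorrelation or POM filtering steps; the point, scale, and angle below are arbitrary examples): in (log r, theta) coordinates, scaling by s and rotating by phi become pure translations of (log s, phi).

```python
import math

# Scaling and rotation become additive shifts in log-polar coordinates,
# which is why a single matched-filter peak can reveal both parameters.

def to_log_polar(x, y):
    return math.log(math.hypot(x, y)), math.atan2(y, x)

def scale_rotate(x, y, s, phi):
    return (s * (x * math.cos(phi) - y * math.sin(phi)),
            s * (x * math.sin(phi) + y * math.cos(phi)))

x, y = 3.0, 4.0
s, phi = 1.5, 0.3
lr0, th0 = to_log_polar(x, y)
lr1, th1 = to_log_polar(*scale_rotate(x, y, s, phi))
# The log-radius shifts by log(s) and the angle by phi.
print(round(lr1 - lr0, 6), round(th1 - th0, 6))
```

Applied to the whole autocorrelation template, every weak peak shifts by the same (log s, phi), so the detection problem reduces to finding one translation.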
Steganography
“Break Our Steganographic System” — the ins and outs of organizing BOSS
This paper summarizes the first international challenge on steganalysis called BOSS (an acronym for Break Our Steganographic System). We explain the motivations behind the organization of the contest, its rules together with reasons for them, and the steganographic algorithm developed for the contest. Since the image databases created for the contest significantly influenced its development, they are described in great detail. The paper also presents a detailed analysis of the results submitted to the challenge. One of the main difficulties the participants had to deal with was the discrepancy between the training and testing sources of images – the so-called cover-source mismatch – which forced the participants to design steganalyzers robust with respect to a specific source of images. We also point out other practical issues related to designing steganographic systems and give several suggestions for future contests in steganalysis.
Gibbs Construction in Steganography
We make a connection between steganography design by minimizing embedding distortion and statistical physics. The unique aspect of this work and one that distinguishes it from prior art is that we allow the distortion function to be arbitrary, which permits us to consider spatially-dependent embedding changes. We provide a complete theoretical framework and describe practical tools, such as the thermodynamic integration for computing the rate–distortion bound and the Gibbs sampler for simulating the impact of optimal embedding schemes and constructing practical algorithms. The proposed framework reduces the design of secure steganography in empirical covers to the problem of finding local potentials for the distortion function that correlate with statistical detectability in practice. By working out the proposed methodology in detail for a specific choice of the distortion function, we experimentally validate the approach and discuss various options available to the steganographer in practice.
Histogram Layer, Moving Convolutional Neural Networks Towards Feature-Based Steganalysis
Feature-based steganalysis has long been an integral tool for detecting the presence of steganography in communication channels. In this paper, we explore the possibility of utilizing powerful optimization algorithms available in convolutional neural network packages to optimize the design of rich features. To this end, we implemented a new layer that simulates the formation of histograms from truncated and quantized noise residuals computed by convolution. Our goal is to show the potential to compactify and further optimize existing features, such as the projection spatial rich model (PSRM).
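The residual-to-histogram pipeline the layer simulates can be sketched outside of any network framework. This is an assumption-laden toy: the horizontal-difference kernel, quantization step q, and truncation threshold t below are illustrative choices, not the paper's parameters.

```python
# Sketch of the histogram-layer idea: compute high-pass noise residuals by
# convolution, quantize and truncate them, and count them into histogram
# bins. The bin counts, not the raw residuals, become the feature vector.

def residual_histogram(img, q=2, t=3):
    """img: 2-D list of ints. Returns counts of quantized residuals in [-t, t]."""
    hist = {k: 0 for k in range(-t, t + 1)}
    for i in range(len(img)):
        for j in range(len(img[0]) - 1):
            r = img[i][j + 1] - img[i][j]      # simplest high-pass: horizontal difference
            r = max(-t, min(t, round(r / q)))  # quantize, then truncate to [-t, t]
            hist[r] += 1
    return hist

img = [[10, 12, 11, 20], [10, 10, 13, 13]]
print(residual_histogram(img))
```

In the paper, the point is to make each of these steps differentiable so the kernels and bins can be learned end to end rather than fixed by hand.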
Imperfect Stegosystems – Asymptotic Laws and Near-Optimal Practical Constructions
Steganography is the art and science of hidden communication. Like cryptography, steganography allows two trusted parties to exchange messages in secrecy, but unlike cryptography, it adds another layer of protection by hiding the mere fact that any communication takes place within plausible cover traffic. The corresponding security goal is thus the statistical undetectability of cover and stego objects, studied by steganalysis, the counterpart to steganography. Ultimately, a stegosystem is perfectly secure if no algorithm can distinguish its cover and stego objects. This dissertation focuses on stegosystems which are not truly perfectly secure; they are imperfect. This is motivated by practice, where all stegosystems built for real digital media, such as digital images, are imperfect. Here, we present two systematic studies related to the secure payload, loosely defined as the amount of payload which can be communicated at a certain level of statistical detectability. The first part of this dissertation describes a fundamental asymptotic relationship between the size of the cover object and the secure payload, now recognized as the Square-Root Law (SRL). Contrary to intuition, the secure payload of imperfect stegosystems does not scale linearly but, instead, according to the square root of the cover size. This law, which was confirmed experimentally, is proved theoretically under very mild assumptions on the cover source and the embedding algorithm. For stegosystems subject to the SRL, the amount of payload one is able to hide per square root of the cover size, called the root rate, leads to a new definition of the capacity of imperfect stegosystems. The second part is devoted to the design of practical embedding algorithms that minimize the statistical impact of embedding.
By discovering the connection between steganography and statistical physics, the Gibbs construction provides a theoretical framework for designing and implementing such embedding algorithms. Moreover, we propose a general solution for implementing, in practice, embedding algorithms that minimize the sum of distortions over individual cover elements. This solution, called the Syndrome-Trellis Code (STC), achieves near-optimal performance over a wide class of distortion functions.
JPEG-Phase-Aware Convolutional Neural Network for Steganalysis of JPEG Images
Detection of modern JPEG steganographic algorithms has traditionally relied on features aware of the JPEG phase. In this paper, we port JPEG-phase awareness into the architecture of a convolutional neural network to boost the detection accuracy of such detectors. Another innovative concept introduced into the detector is the “catalyst kernel” that, together with the traditional high-pass filters used to pre-process images, allows the network to learn kernels more relevant for detecting the stego signal introduced by JPEG steganography. Experiments with the J-UNIWARD and UED-JC embedding algorithms are used to demonstrate the merit of the proposed design.
Minimizing Additive Distortion in Steganography using Syndrome-Trellis Codes
This paper proposes a complete practical methodology for minimizing additive distortion in steganography with general (non-binary) embedding operation. Let every possible value of every stego element be assigned a scalar expressing the distortion of an embedding change done by replacing the cover element by this value. The total distortion is assumed to be a sum of per-element distortions. Both the payload-limited sender (minimizing the total distortion while embedding a fixed payload) and the distortion-limited sender (maximizing the payload while introducing a fixed total distortion) are considered. Without any loss of performance, the non-binary case is decomposed into several binary cases by replacing individual bits in cover elements. The binary case is approached using a novel syndrome-coding scheme based on dual convolutional codes equipped with the Viterbi algorithm. This fast and very versatile solution achieves state-of-the-art results in steganographic applications while having linear time and space complexity w.r.t. the number of cover elements. We report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel. The practical merit of this approach is validated by constructing and testing adaptive embedding schemes for digital images in raster and transform domains. Most current coding schemes used in steganography (matrix embedding, wet paper codes, etc.) and many new ones can be implemented using this framework.
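The syndrome-coding family the paper generalizes with trellises can be illustrated by its simplest member, matrix embedding with the [7,4] Hamming code's parity-check matrix (a toy, not the STC itself, and the cover bits below are arbitrary): 3 message bits are hidden in 7 cover bits while changing at most one of them.

```python
# Toy matrix embedding (syndrome coding): the receiver reads the message as
# the syndrome H*y mod 2; the sender flips at most one cover bit so that the
# syndrome equals the desired message.

H = [[(c >> k) & 1 for c in range(1, 8)] for k in range(3)]  # columns = 1..7 in binary

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def embed(cover, msg):
    s = [a ^ b for a, b in zip(syndrome(cover), msg)]
    idx = s[0] + 2 * s[1] + 4 * s[2]   # which column of H equals the needed correction
    stego = list(cover)
    if idx:
        stego[idx - 1] ^= 1            # flip at most one cover bit
    return stego

cover = [1, 0, 1, 1, 0, 0, 1]
msg = [1, 0, 1]
stego = embed(cover, msg)
print(syndrome(stego) == msg)                          # -> True
print(sum(a != b for a, b in zip(cover, stego)) <= 1)  # -> True
```

The STC replaces this fixed small code with a convolutional code decoded by the Viterbi algorithm, so the single change above becomes a globally distortion-minimizing set of changes.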
Moving Steganography and Steganalysis from the Laboratory into the Real World
There has been an explosion of academic literature on steganography and steganalysis in the past two decades. With a few exceptions, such papers address abstractions of the hiding and detection problems, which arguably have become disconnected from the real world. Most published results, including by the authors of this paper, apply “in laboratory conditions” and some are heavily hedged by assumptions and caveats; significant challenges remain unsolved in order to implement good steganography and steganalysis in practice. This position paper sets out some of the important questions which have been left unanswered, as well as highlighting some that have already been addressed successfully, for steganography and steganalysis to be used in the real world.
Steganography with Statistical Models of Image Noise Residuals
Steganography alters innocuous-looking cover objects in order to communicate in secrecy. This manuscript focuses on steganography in digital images, arguably the most popular and most studied cover objects. The current focus of steganography is on content-adaptive schemes that are realized through minimizing a distortion function designed to focus the attention of the embedding on highly textured regions of images that are hard to model and where the embedding is less detectable. The actual embedding is done through efficient coding schemes. As interesting as this whole paradigm of embedding by minimizing distortion might seem, distortion is not detectability. It is only linked heuristically through the design of the distortion function. One of the contributions of this dissertation is to formulate this problem through statistical hypothesis testing theory by modeling image noise residuals as a sequence of independent, quantized, zero-mean Gaussian random variables. Within this model, the most secure steganographic approach is the one that Minimizes the Power of the Most Powerful Detector (MiPOD) built to distinguish between cover and stego objects. To the best of the author's knowledge, the proposed model-based embedding scheme, MiPOD, is the first embedding scheme of its kind with security comparable to the current state of the art in content-adaptive steganography. This dissertation also looks into many interesting implications of having a model-based approach for steganography and steganalysis. The model-based detector is used to assess the performance and optimality of current feature-based steganalysis schemes. A new detectability-limited sender is proposed that adjusts the payload embedded in each image up to a certain prescribed level of detectability. Furthermore, for the first time, the proposed detector enables us to measure the secure payload size of a single image for a certain prescribed detectability.
Recently, it has been shown that the detection power of feature-based steganalysis can be improved by reducing the redundancy in the extracted feature vectors by focusing the attention of the feature extractor on the heavily embedded regions inside each image, hence selection-channel-aware feature sets. This dissertation, among other contributions, presents a systematic approach to studying the effect of inaccuracies between the steganographer's activities and the steganalyst's presumed assumptions about those activities, e.g., the embedding payload and access to the cover source. It is proposed to model these inaccuracies as four different types of Warden with different levels of knowledge about the selection channel, in order to assess the security of state-of-the-art embedding schemes under these different settings. Finally, this dissertation uses the proposed model-based schemes to reformulate the problem of batch steganography and pooled steganalysis. The most powerful detector, aware of the spreading strategies used by Alice inside each communication bag of images, is built as a matched filter and further simplified using a practical estimation approach. Furthermore, three intuitive payload-spreading strategies are proposed with roots in both model-based and content-adaptive steganography.
Using High-Dimensional Image Models to Perform Highly Undetectable Steganography
This paper presents a complete methodology for designing practical and highly undetectable stegosystems for real digital media. The main design principle is to minimize a suitably defined distortion by means of an efficient coding algorithm. The distortion is defined as a weighted difference of extended state-of-the-art feature vectors already used in steganalysis. This allows us to “preserve” the model used by the steganalyst and thus be undetectable even for large payloads. This framework can be efficiently implemented even when the dimensionality of the feature set used by the embedder is larger than 10^7. The high-dimensional model is necessary to avoid known security weaknesses. Although high-dimensional models might be a problem in steganalysis, we explain why they are acceptable in steganography. As an example, we introduce HUGO, a new embedding algorithm for spatial-domain digital images, and we contrast its performance with LSB matching. On the BOWS2 image database, and in contrast with LSB matching, HUGO allows the embedder to hide a 7× longer message at the same level of security.
Signal Rich Art & Style Transfer
Hiding in Plain Sight: Enabling the Vision of Signal Rich Art
Digital watermarking technologies are based on the idea of embedding a data-carrying signal in a semi-covert manner in a given host image. Here we describe a new approach in which we render the signal itself as an explicit artistic pattern, thereby hiding the signal in plain sight. This pattern may be used as is, or as a texture layer in another image for various applications. There is an immense variety of signal carrying patterns and we present several examples. We also present some results on the detection robustness of these patterns.
Signal rich art: enabling the vision of ubiquitous computing
Advances in networking and mobile computing are converging with digital watermarking technology to realize the vision of Ubiquitous Computing, wherein mobile devices can sense, understand, and interact with their environments. Watermarking is the primary technology for embedding signals in the media, objects, and art constituting our everyday surroundings, and so it is a key component in achieving Signal Rich Art: art that communicates its identity to context-aware devices. However, significant obstacles to integrating watermarking and art remain, specifically questions of incorporating watermarking into the process of creating art. This paper identifies numerous possibilities for research in this arena.
Signal Rich Art: Improvements and Extensions
Signal rich art is an application of watermarking technology to communicate information using visible artistic patterns. In this paper we show new methods of generating signal carrying patterns, simplifications of earlier methods, how to embed a vector watermark signal in applications and how to use signal symmetries to expand the detection envelope of a watermark reader.
Signal Rich Art: Object Placement, Object Position Modulation and Other Advances
Signal rich art is an alternative paradigm for watermarking, in which we embed a signal in an image or software application such as a website or app as a visible artistic pattern. In this paper we present a new algorithm for generating signal carrying patterns from a dictionary of objects which we call object placement. In an alternative approach, called object position modulation, we locally perturb the positions of objects in a given pattern to embed the signal. We also present advances in previous techniques.
Deepfakes
A System for Mitigating the Problem of Deepfake News Videos Using Watermarking
This paper describes how watermarking technology can be used to prevent the proliferation of Deepfake news. In the proposed system, digital watermarks are embedded in the audio and video tracks of video clips of trusted news agencies at the time the videos are captured or before they are distributed. The watermarks are detected at the social media network's portals, nodes, and back ends. The embedded watermark imparts a unique identifier to the video that links it to a blockchain. The watermarks also allow video source tracking, integrity verification, and alteration localization. The watermark detectors can be standalone software applications, or they can be integrated with other applications. They are used to perform three main tasks: (1) they alert internet users when they watch an inauthentic news video, so that they may discard it; (2) they prevent a Deepfake news video from propagating through the network; and (3) they perform forensic analysis to help track and remove Deepfake news video postings. The paper includes Proof-of-Concept simulation results.
Audio Watermarking
High-Capacity, Invertible, Data-Hiding Algorithm for Digital Audio
A high-capacity, data-hiding algorithm that lets the user embed a large amount of data in a digital audio signal is presented in this paper. The algorithm also lets the user restore the original digital audio from the watermarked digital audio after retrieving the hidden data. The hidden information can be used to authenticate the audio, communicate copyright information, facilitate audio database indexing and information retrieval without degrading the quality of the original audio signal, or enhance the information content of the audio. It also allows secret communication between two parties over a digital communication link. The proposed algorithm is based on a generalized, reversible, integer transform, which calculates the average and pair-wise differences between the elements of a vector composed from the audio samples. The watermark is embedded into the pair-wise difference coefficients of selected vectors by replacing their least significant bits (LSB) with watermark bits. Most of these coefficients are shifted left by one bit before replacing their LSB. The vectors are carefully selected such that they remain identifiable after embedding and they do not suffer from overflow or underflow after embedding. To ensure reversibility, the locations of the shifted coefficients and the original LSBs are appended to the payload. Simulation results of the algorithm and its performance are presented and discussed in the paper.
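The core of such a generalized reversible integer transform (the vector's integer average plus pairwise differences from the first sample) can be sketched as below. This is an illustrative reconstruction of the transform family, not the paper's exact algorithm, and the bit-embedding and overflow-checking steps are omitted.

```python
def forward(v):
    """Integer average and pairwise differences of an integer vector."""
    a = sum(v) // len(v)                  # floor average
    d = [x - v[0] for x in v[1:]]         # differences vs. the first sample
    return a, d

def inverse(a, d):
    """Exact inverse: sum(v) = n*x0 + sum(d), so x0 is recoverable."""
    n = len(d) + 1
    x0 = a - sum(d) // n
    return [x0] + [x0 + di for di in d]

samples = [118, 120, 119, 125]            # e.g. four adjacent audio samples
assert inverse(*forward(samples)) == samples
```

Because the transform is exactly invertible on integers, shifting selected difference coefficients and replacing their LSBs with payload bits can later be undone to restore the original audio.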
Reversible Techniques
A High-Capacity, Invertible, Data-Hiding Algorithm Using a Generalized, Reversible, Integer Transform
A high-capacity, data-hiding algorithm that lets the user restore the original host image after retrieving the hidden data is presented in this paper. The proposed algorithm can be used for watermarking valuable or sensitive images such as original art works or military and medical images. The proposed algorithm is based on a generalized, reversible, integer transform, which calculates the average and pair-wise differences between the elements of a vector extracted from the pixels of the image. The watermark is embedded into a set of carefully selected coefficients by replacing the least significant bit (LSB) of every selected coefficient by a watermark bit. Most of these coefficients are shifted left by one bit before replacing their LSBs. Several conditions are derived and used in selecting the appropriate coefficients to ensure that they remain identifiable after embedding. In addition, the selection of coefficients ensures that the embedding process does not cause any overflow or underflow when the inverse of the transform is computed. To ensure reversibility, the locations of the shifted coefficients and the original LSBs are embedded in the selected coefficients before embedding the desired payload. Simulation results of the algorithm and its performance are also presented and discussed in the paper.
Reversible transformations may improve the quality of reversible watermarking
We investigate the use of reversible pre-embedding transformations to enhance reversible watermarking schemes for images. We are motivated by the observation that a (non-reversible) sorting transformation dramatically increases the quality of the embedding when combined with a reversible watermark based on a generalized integer transform. In one example, we obtain a PSNR gain of 23 dB using the pre-sorting approach over the regular embedding method for the same payload size. This may provide opportunities for increasing the embedding capacity by trading off quality for a larger payload size. We also test several reversible sorting approaches, but these provide no gain in watermarking capacity or quality.
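A quick numeric demonstration of why pre-sorting helps difference-based reversible embedding: sorting makes adjacent values close, so the differences to be expanded (and hence the embedding distortion) shrink. The pixel values below are invented for illustration.

```python
def total_abs_adjacent_difference(values):
    """Sum of absolute differences between neighboring values."""
    return sum(abs(a - b) for a, b in zip(values, values[1:]))

pixels = [200, 13, 180, 45, 90, 220, 30, 150]
raw = total_abs_adjacent_difference(pixels)                # scan order: 974
presorted = total_abs_adjacent_difference(sorted(pixels))  # sorted: 207
assert presorted < raw
```

For a sorted sequence the total telescopes to max minus min, which is why the gain can be so dramatic on busy images.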
Reversible Watermark Using Difference Expansion of Triplets
A new reversible watermarking algorithm based on difference expansion has been developed for color images. Since the watermark is completely reversible, the original image can be recovered exactly. The algorithm uses spatial and spectral triplets of pixels to hide pairs of bits, which allows it to hide a large amount of data. A spatial triplet is any three pixel values selected from the same spectral component, while a spectral triplet is any three pixel values selected from different spectral components. The algorithm is recursively applied to the rows and columns of the spectral components of the image and across all spectral components to maximize the hiding capacity. Simulation results show that the hiding capacity of the algorithm is very high and the resulting distortion is low.
Reversible watermarking by difference expansion
Reversible watermarking has drawn considerable interest recently. Unlike other types of digital watermarks, a reversible watermark allows the original digital content to be restored exactly. In this paper, we describe a high-capacity, high-quality reversible watermarking method based on difference expansion. A notable difference between our method and others is that we do not need to compress the original values of the embedding area; instead, we exploit the redundancy in the digital content to achieve reversibility.
Reversible Watermark Using the Difference Expansion of a Generalized Integer Transform
A reversible watermarking algorithm with very high data-hiding capacity has been developed for color images. The algorithm allows the watermarking process to be reversed, which restores the exact original image. The algorithm hides several bits in the difference expansion of vectors of adjacent pixels. The required general reversible integer transform and the necessary conditions to avoid underflow and overflow are derived for any vector of arbitrary length. Also, the potential payload size that can be embedded into a host image is discussed, and a feedback system for controlling this size is developed. In addition, to maximize the amount of data that can be hidden into an image, the embedding algorithm can be applied recursively across the color components. Simulation results using spatial triplets, spatial quads, cross-color triplets, and cross-color quads are presented and compared with the existing reversible watermarking algorithms. These results indicate that the spatial, quad-based algorithm allows for hiding the largest payload at the highest signal-to-noise ratio.
Wavelet-based reversible watermarking for authentication
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become urgent problems for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarks, a reversible watermark (also called a lossless, invertible, or erasable watermark) enables recovery of the original, unwatermarked content after the watermarked content has been verified as authentic. Such reversibility is highly desired for sensitive imagery, such as military and medical data. In this paper, we present a reversible watermarking method based on an integer wavelet transform. We examine the binary representation of each wavelet coefficient and embed an extra bit into each expandable coefficient. The location map of all expanded coefficients is coded by JBIG2 compression, and these coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
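The reversible integer (Haar/S-transform) wavelet step that such methods build on, together with the standard "expandable coefficient" test guarding against pixel overflow, can be sketched as below. The function names and the 8-bit pixel range are illustrative assumptions.

```python
def s_transform(x, y):
    """One integer Haar step: approximation a and detail d."""
    return (x + y) // 2, x - y

def inverse_s_transform(a, d):
    """Exactly invertible thanks to floor division."""
    x = a + (d + 1) // 2
    return x, x - d

def expandable(a, d, bit=1):
    """Can d carry an extra LSB without pushing pixels outside 0..255?"""
    d2 = 2 * d + bit
    return abs(d2) <= min(2 * (255 - a), 2 * a + 1)

a, d = s_transform(100, 96)
assert inverse_s_transform(a, d) == (100, 96)
assert expandable(a, d)           # small detail near mid-gray: safe to expand
assert not expandable(254, 200)   # near-white pair would overflow
```

Coefficients that fail the test are recorded in the location map instead, which is why that map must travel with the payload.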
Commercial Applications
Automation and workflow considerations for embedding Digimarc Barcodes at scale
The Digimarc® Barcode is a digital watermark applied to packages and variable data labels that carries GS1-standard GTIN-14 data traditionally carried by a 1-D barcode. The Digimarc Barcode can be read with smartphones and imaging-based barcode readers commonly used in grocery and retail environments. Using smartphones, consumers can engage with products, and retailers can materially increase the speed of check-out, increasing store margins and providing a better experience for shoppers. Internal testing has shown an average 53% increase in scanning throughput, enabling hundreds of millions of dollars in cost savings [1] for retailers when deployed at scale. To get to scale, the process of embedding a digital watermark must be automated and integrated within existing workflows. Creating the tools and processes to do so represents a new challenge for the watermarking community. This paper presents a description and an analysis of the workflow implemented by Digimarc to deploy the Digimarc Barcode at scale. An overview of the tools created and the lessons learned during the introduction of the technology to the market is provided.
Bridging Printed Media and the Internet via Digimarc's Watermarking Technology
This paper introduces Digimarc MediaBridge, explains the basics of its digital watermarking technology and shows how it seamlessly transports readers from a paper page to a Web page. Since much of our information still comes from printed media, whether in the form of newspapers, magazines, or packaging, Digimarc MediaBridge marries the best attributes of print with the visual impact and content expansiveness of the Internet. Digimarc MediaBridge works by using the science of steganography to communicate in ways that hide a message within the pixel patterns of an image. It allows a Web camera or scanner to translate the Digimarc MediaBridge image into an instruction to launch an Internet browser and to display the Web site of the author’s choice.
Digimarc MediaBridge: the birth of a consumer product from concept to commercial application
This paper examines the issues encountered in the development and commercial deployment of a system based on digital watermarking technology. The paper provides an overview of the development of digital watermarking technology and the first applications to use the technology. It also looks at how we took the concept of digital watermarking as a communications channel within a digital environment and applied it to the physical print world to produce the Digimarc MediaBridge product. We describe the engineering tradeoffs that were made to balance competing requirements of watermark robustness, image quality, embedding process, detection speed and end user ease of use. Today, the Digimarc MediaBridge product links printed materials to auxiliary information about the content, via the Internet, to provide enhanced informational marketing, promotion, advertising and commerce opportunities.
Digital Watermarking Framework - Applications, Parameters, and Requirements
This chapter reviews a framework that includes digital watermark classifications, applications, important algorithm parameters, the requirements for applications in terms of these parameters, and workflow. The goals are twofold: help technology and solution providers design appropriate watermark algorithms and systems and aid potential customers in understanding the applicability of technology and solutions to their markets. Digital watermarks traditionally carry as payload either one or both of the following types of data: local data and persistent identifier that links to a database. Digital watermarking algorithms can also be classified as robust and fragile. Another classification is based on the detection design parameter in watermarking algorithms, whether they are designed to do blind detection or informed detection. Annotation refers to hiding information, usually about the content, in the content. The watermark payload carries a persistent copyright owner identifier that can be linked to information about the content owner and copyright information in a linked database.
Engineering Considerations in Commercial Watermarking
This article explores some of the engineering and design considerations for a commercial implementation of watermarking technology. Watermarking technology has many performance characteristics that compete with one another in the system design space. After a discussion of these characteristics, we examine the engineering trade-offs required for a particular commercial instantiation of watermarking technology, the Digimarc MediaBridge system. We begin with an overview of the system and its requirements, and then discuss the engineering considerations that led to its successful implementation.
Feature-Based Watermark Localization in Digital Capture Systems
The “Internet of Things” is an appealing concept aiming to assign digital identity to both physical and digital everyday objects. One way of achieving this goal is to embed the identity in the object itself by using digital watermarking. In the case of printed physical objects, such as consumer packages, this identity can be later read from a digital image of the watermarked object taken by a camera. In many cases, the object might occupy only a small portion of the image, and an attempt to read the watermark payload from the whole image can lead to unnecessary processing. This paper proposes a statistical learning-based algorithm for localizing watermarked physical objects in images taken by a digital camera. The algorithm is specifically designed and tested on watermarked consumer packages read by an off-the-shelf barcode imaging scanner. By employing simple noise-sensitive features borrowed from blind image steganalysis and a linear classifier, we are able to estimate probabilities of watermark presence in every part of the image significantly faster than running a watermark detector. These probabilities are used to pinpoint areas that are recommended for further processing. We compare our adaptive approach with a system designed to read watermarks from a set of fixed locations and achieve significant savings in processing time while improving overall detector robustness.
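The localization idea can be sketched cheaply: compute a noise-sensitive residual feature per image block and score it with a linear classifier, then run the full watermark detector only on high-scoring blocks. The feature, the hand-set weights, and the threshold below are all illustrative assumptions, not the paper's trained model.

```python
import math

def block_feature(block):
    """Mean absolute second difference along rows: crude high-pass residual energy."""
    total = count = 0
    for row in block:
        for i in range(1, len(row) - 1):
            total += abs(row[i - 1] - 2 * row[i] + row[i + 1])
            count += 1
    return total / count

def watermark_probability(block, w=0.1, b=-2.0):
    """Linear classifier with a logistic link; weights here are made up."""
    return 1 / (1 + math.exp(-(w * block_feature(block) + b)))

flat = [[100] * 8 for _ in range(8)]                       # smooth background
textured = [[100 + 20 * (i % 2) for i in range(8)] for _ in range(8)]
assert watermark_probability(textured) > watermark_probability(flat)
```

Ranking blocks by this probability is far cheaper than running the watermark detector everywhere, which is the source of the reported speedup.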
Use of web cameras for watermark detection
Many articles covering novel techniques, theoretical studies, attacks, and analyses have been published recently in the field of digital watermarking. In the interest of expanding commercial markets and applications of watermarking, this paper is part of a series of papers from Digimarc on practical issues associated with commercial watermarking applications. In this paper we address several practical issues associated with the use of web cameras for watermark detection. In addition to the obvious issues of resolution and sensitivity, we explore issues related to the tradeoff between gain and integration time to improve sensitivity, and the effects of fixed pattern noise, time variant noise, and lens and Bayer pattern distortions. Furthermore, the ability to control (or at least determine) camera characteristics including white balance, interpolation, and gain have proven to be critical to successful application of watermark readers based on web cameras. These issues and tradeoffs are examined with respect to typical spatial-domain and transform-domain watermarking approaches.
On the use of mobile imaging devices for the validation of first- and second-line security features
The proliferation of mobile imaging devices combined with Moore's law has yielded a class of devices that are capable of imaging and/or validating many First- and Second-Line security features. Availability of these devices at little or no cost due to economic models and commoditization of constituent technologies will result in a broad infrastructure of devices capable of identifying fraud and counterfeiting. The presence of these devices has the potential to influence aspects of design, production, and usage models for value documents as both a validation tool and as a mechanism for attack. To maximize usability as a validation tool, a better understanding is needed about the imaging capabilities of these devices and which security features and design approaches favor them. As a first step in this direction, the authors investigated using a specific imaging-equipped cell phone as an inspection and validation tool for identity documents. The goal of the investigation was to assess the viability of the device to identify photo swapping, image alteration, data alteration, and counterfeiting of identity documents. To do so security printing techniques such as digital watermarking, microprinting and a Diffractive Optically Variable Image Device were used. Based on analysis of a representative imaging-equipped cell phone (Fujitsu 900i), the authors confirmed that within some geographies, deployed devices are capable of imaging value documents at sufficiently high resolution to enable inspection and validation usage models across a limited set of security features.
Optimized selection of benchmark test parameters for image watermark algorithms based on Taguchi methods and corresponding influence on design decisions for real-world applications
With the growing commercialization of watermarking techniques in various application scenarios, it has become increasingly important to quantify the performance of watermarking products. The quantification of the relative merits of various products is not only essential in enabling further adoption of the technology by society as a whole, but will also drive the industry to develop testing plans and methodologies to ensure quality and minimize cost (to both vendors and customers). While the research community understands the theoretical need for a publicly available benchmarking system to quantify performance, there has been less discussion of the practical application of these systems. By providing a standard set of acceptance criteria, benchmarking systems can dramatically increase the quality of a particular watermarking solution, validating product performance if they are used efficiently and frequently during the design process. In this paper, we describe how to leverage specific design-of-experiments techniques to increase the quality of a watermarking scheme, to be used with the benchmark tools being developed by the Ad-Hoc Watermark Verification Group. A Taguchi loss function is proposed for an application, and orthogonal arrays are used to isolate optimal levels for a multi-factor experimental situation. Finally, the results are generalized to a population of cover works and validated through an exhaustive test.
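A Taguchi-style quadratic loss for scoring benchmark outcomes can be sketched as follows: deviation of the measured response from its target is penalized quadratically, and candidate parameter settings are ranked by average loss. The cost constant, the target, and the example detection rates are invented for illustration.

```python
def taguchi_loss(y, target, k=1.0):
    """L(y) = k * (y - target)**2: the 'nominal is best' quadratic loss."""
    return k * (y - target) ** 2

# Score hypothetical parameter settings by average loss over a small test set,
# targeting a detection rate of 1.0.
detection_rates = {"setting_A": [0.97, 0.95, 0.99],
                   "setting_B": [0.80, 0.99, 0.85]}
avg_loss = {s: sum(taguchi_loss(y, 1.0) for y in ys) / len(ys)
            for s, ys in detection_rates.items()}
best = min(avg_loss, key=avg_loss.get)
assert best == "setting_A"
```

In the paper's setting, an orthogonal array would choose which factor combinations to test so that far fewer runs than an exhaustive sweep are needed.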
Practical Challenges for Digital Watermarking Applications
The field of digital watermarking has recently seen numerous articles covering novel techniques, theoretical studies, attacks and analysis. We focus on practical challenges for digital watermarking applications. Challenges include design considerations, requirements analysis, choice of watermarking techniques, speed, robustness and the tradeoffs involved. We describe common attributes of watermarking systems and discuss the challenges in developing real world applications. We present, as a case study, a hypothetical application that captures important aspects of watermarking systems and illustrates some of the issues faced.
Smart Images using Digimarc's watermarking technology
This paper introduces the concept of Smart Images and explains the use of watermarking technology in their implementation. A Smart Image is a digital or physical image that contains a digital watermark, which leads to further information about the image content via the Internet, communicates ownership rights and the procedure for obtaining usage rights, facilitates commerce, or instructs and controls other computer software or hardware. Thus, Smart Images, empowered by digital watermarking technology, act as active agents or catalysts which gracefully bridge both traditional and modern electronic commerce. This paper presents the use of Digimarc Corporation's watermarking technology to implement Smart Images. The paper presents an application that demonstrates how Smart Images facilitate both traditional and electronic commerce. The paper also analyzes the technological challenges to be faced for ubiquitous use of Smart Images.
Digital Watermarking Applications in Content Distribution Lessons for Ongoing Innovation
Digital watermarking is the process by which digital information -- referred to as a “payload” -- is embedded into all forms of digital media content in a way that is imperceptible to humans, yet persists with the file through format changes and non-linear distribution paths. The process of infusing digital data has little or no impact on the integrity or fidelity of the file. The payload is then detected by a range of reading devices equipped with special software to facilitate the lookup and appropriate response to meet the goals and objectives of a wide range of applications.
Using digital watermarks with image signatures to mitigate the threat of the copy attack
In some applications, the utility of an image watermarking system is greatly reduced if an attacker is able to extract a watermark from a marked image and re-embed it into an unmarked image. This threat is known as the copy attack. We develop an image signature scheme to be used with digital watermarks to create an image watermarking system that is more resistant to this attack. We describe the image signature algorithm in detail, and how it may be fused with a digital watermark. We then present preliminary results of our system using an image test set of highly correlated images.
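The fusion idea can be sketched as binding the watermark payload to a content-derived signature, so a watermark copied onto a different image fails verification. The coarse row-mean "signature" below is a toy stand-in for the paper's image signature algorithm, and all names are illustrative.

```python
import hashlib

def signature(pixels):
    """Toy perceptual signature: hash of coarse row means of the image."""
    means = tuple(sum(row) // len(row) for row in pixels)
    return hashlib.sha256(repr(means).encode()).hexdigest()[:8]

def make_payload(message, pixels):
    """Fuse the message with the signature of the image it is embedded in."""
    return message + "|" + signature(pixels)

def verify(payload, pixels):
    """A copied watermark carries the wrong signature for the new image."""
    _, sig = payload.rsplit("|", 1)
    return sig == signature(pixels)

img_a = [[10, 12], [200, 190]]
img_b = [[90, 95], [90, 95]]
payload = make_payload("owner42", img_a)
assert verify(payload, img_a)
assert not verify(payload, img_b)   # watermark copied to an unmarked image fails
```

A practical signature must also survive the embedding distortion itself and benign processing, which is why the paper uses a robust perceptual signature rather than an exact hash of the pixels.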
Watermarking spot colors in packaging
In January 2014, Digimarc announced Digimarc® Barcode for the packaging industry to improve check-out efficiency and the customer experience for retailers. Digimarc Barcode is a machine-readable code that carries the same information as a traditional Universal Product Code (UPC) and is introduced by adding a robust digital watermark to the package design. It is imperceptible to the human eye but can be read by a modern barcode scanner at the Point of Sale (POS) station. Compared to a traditional linear barcode, Digimarc Barcode covers the whole package with minimal impact on the graphic design. This significantly improves the Items per Minute (IPM) metric, which retailers use to track checkout efficiency, since it closely relates to their profitability. Increasing IPM by a few percent could lead to potential savings of millions of dollars for retailers, giving them a strong incentive to add the Digimarc Barcode to their packages. Testing performed by Digimarc showed increases in IPM of at least 33% using the Digimarc Barcode, compared to using a traditional barcode. A method of watermarking print-ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors; therefore, spot colors need to be supported by an embedder for Digimarc Barcode. Digimarc Barcode supports the PANTONE spot color system, which is commonly used in the packaging industry. The Digimarc Barcode embedder allows a user to insert the UPC code in an image while minimizing perceptibility to the Human Visual System (HVS). The Digimarc Barcode is inserted in the printing ink domain, using an Adobe Photoshop plug-in as the last step before printing. Since Photoshop is an industry standard widely used by pre-press shops in the packaging industry, a Digimarc Barcode can be easily inserted and proofed.
Human Visual Systems
Adaptive color watermarking
In digital watermarking, a major aim is to insert the maximum possible watermark signal while minimizing visibility. Many watermarking systems embed data in the luminance channel to ensure watermark survival through operations such as grayscale conversion. For these systems, one method of reducing visibility is for the luminance changes due to the watermark signal to be inserted into the colors least visible to the human visual system, while minimizing the changes in the image hue. In this paper, we develop a system that takes advantage of the low sensitivity of the human visual system to high frequency changes along the yellow-blue axis, to place most of the watermark in the yellow component of the image. We also describe how watermark detection can potentially be enhanced, by using a priori knowledge of this embedding system to intelligently examine possible watermarked images.
Content-based Digital Watermarking using Contrast and Directionality
Human visual system (HVS) models have been used in digital watermarking to minimize the visual effects of the watermark while increasing its strength. Such work has been applied to different watermarking schemes with varying degrees of success. Previous work at Digimarc resulted in an HVS model that inserts a high watermark signal in busy or high-contrast areas, while reducing the watermark on connected directional edges where it becomes more visible. In certain instances, however, this technique inserts a high watermark signal in a region where masking due to the image is insufficient to hide the signal. For example, the watermark becomes apparent in areas with fine texture containing a dominant orientation, like hair. This paper introduces a new HVS model, based on techniques that identify areas with a dominant orientation and suppress the watermark gain for those regions. Once a contrast is computed, another measurement (called directionality) is made on a small neighborhood using a standard wavelet filter set and a rotated wavelet filter set to determine if the region is highly oriented in one direction. The watermark strength is suppressed if the corresponding area has high contrast and a high directionality measure, while the gain reaches its maximum when the area has high contrast and a low directionality measure. Experiments on problem images show that the proposed technique remedies the limitations of the previous HVS model to some extent, while not degrading watermark detection performance.
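The gain rule described above can be sketched as a simple decision table: attenuate the watermark where local contrast and directionality are both high, and boost it where contrast is high but no orientation dominates. The thresholds and gain constants are illustrative assumptions; the paper computes directionality from standard and rotated wavelet filter sets rather than from the scalar inputs used here.

```python
def gain(contrast, directionality, base=1.0):
    """Local watermark gain from normalized contrast and directionality in [0, 1]."""
    if contrast < 0.2:
        return base * 0.5     # flat region: little masking, keep gain low
    if directionality > 0.7:
        return base * 0.4     # strong oriented texture (e.g. hair): suppress
    return base * 1.5         # busy, isotropic texture: boost

assert gain(0.9, 0.9) < gain(0.9, 0.1)   # oriented texture vs. random texture
assert gain(0.05, 0.1) < gain(0.9, 0.1)  # flat region vs. busy region
```

Applied per block, this yields a gain map that modulates the watermark strength across the image.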
Full-Color Visibility Model Using CSF which Varies Spatially with Local Luminance
A full color visibility model has been developed that uses separate contrast sensitivity functions (CSFs) for contrast variations in luminance and chrominance (red-green and blue-yellow) channels. The width of the CSF in each channel is varied spatially depending on the luminance of the local image content. The CSF is adjusted so that more blurring occurs as the luminance of the local region decreases. The difference between the contrast of the blurred original and marked image is measured using a color difference metric. This spatially varying CSF performed better than a fixed CSF in the visibility model, approximating subjective measurements of a set of test color patches ranked by human observers for watermark visibility. The effect of using the CIEDE2000 color difference metric compared to CIEDE1976 (i.e., a Euclidean distance in CIELAB) was also compared.
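The model's core idea can be shown in a toy 1-D form: blur both the original and the marked signal with a kernel whose width grows as local luminance drops, then measure the remaining difference. The box blur, the luminance threshold, and the Euclidean difference (standing in for the CIEDE color difference metric) are all invented simplifications.

```python
def blur(signal, radius):
    """Simple box blur standing in for a CSF filter."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def visibility(original, marked, luminance):
    """Wider blur in darker regions, then RMS difference of the blurred signals."""
    radius = 1 if luminance > 0.5 else 3
    o, m = blur(original, radius), blur(marked, radius)
    return (sum((a - b) ** 2 for a, b in zip(o, m)) / len(o)) ** 0.5

base = [0.5] * 16
marked = [0.5 + 0.02 * (-1) ** i for i in range(16)]  # high-frequency watermark
# the same watermark is predicted to be less visible in the darker region
assert visibility(base, marked, luminance=0.2) < visibility(base, marked, luminance=0.8)
```

The wider blur in dark regions attenuates the high-frequency watermark more, matching the model's prediction that the same modulation is harder to see there.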
Geometric Enumerated Chrominance Watermark Embed for Spot Colors
Most packaging is printed using spot colors to reduce cost, produce consistent colors, and achieve a wide color gamut on the package. Most watermarking techniques are designed to embed a watermark in cyan, magenta, yellow, and black for printed images or red, green, and blue for displayed digital images. Our method addresses the problem of watermarking spot color images. An image containing two or more spot colors is embedded with a watermark in two of the colors with the maximum signal strength within a user-selectable visibility constraint. The user can embed the maximum watermark signal while meeting the required visibility constraint. The method has been applied to the case of two spot colors and images have been produced that are more than twice as robust to Gaussian noise as a single color image embedded with a luminance-only watermark with the same visibility constraint.
Measurement of CIELAB spatio-chromatic contrast sensitivity in different spatial and chromatic directions
This paper presents data on CIELAB chromatic contrast sensitivity collected in a psychophysical experiment. To complement previously published data in the low-frequency range, we selected five spatial frequencies in the range from 2.4 to 19.1 cycles per degree (cpd). A Gabor stimulus was modulated along six chromatic directions in the a*-b* plane. We also investigated the impact on contrast sensitivity from spatial orientations – both vertically and diagonally oriented stimuli were used. The analysis of the collected data showed lowest contrast sensitivity in the chromatic direction of around 120° from the positive a*-axis. The contrast sensitivity in the diagonal spatial orientation is slightly lower when compared to the vertical orientation.
Selecting Best Ink Color for Sparse Watermark
A method of watermarking print ready image data used in the commercial packaging industry is described. A significant proportion of packages are printed using spot colors, therefore various tools to support watermarking spot colors are required. Previously we have described a method which assumes that the package design contains process colors CMY as well as the spot colors to insert a chrominance watermark [1]. Some package designs do not include the process colors CMY or are being printed with a print process which allows limited overprinting, and require a different approach. For simplicity of press control, a binary watermarking system was developed. The binary watermark is inserted in a single ink color with values of 0 and 100%. For the binary watermark to be used in a package design, a method is required which evaluates the ink colors used in a package design and ranks them for use with a modern barcode scanner at the Point of Sale (POS) station.
Using Watermark Visibility Measurements To Select An Optimized Pair Of Spot Colors For Use In A Binary Watermark
Spot colors are widely used in the food packaging industry. We wish to add a watermark signal within a spot color that is readable by a Point of Sale (POS) barcode scanner, which typically has red illumination. Some spot colors, such as blue, black, and green, reflect very little red light and are difficult to modulate with a watermark at low visibility to a human observer. The visibility measurements that have been made with the Digimarc watermark enable the selection of a complementary color to the base color that can be detected by a POS barcode scanner but is imperceptible at normal viewing distance.
Digital Watermarking Using Improved Human Visual System Model
In digital watermarking, one aim is to insert the maximum possible watermark signal without significantly affecting image quality. Advantage can be taken of the masking effect of the eye to increase the signal strength in busy or high-contrast image areas. The application of such a human visual system model to watermarking has been proposed by several authors. However, if a simple contrast measurement is used, an objectionable ringing effect may become visible on connected directional edges. In this paper we describe a method which distinguishes between connected directional edges and high-frequency textured areas, which have no preferred edge direction. The watermark gain on connected directional edges is suppressed, while the gain in high-contrast textures is increased. Overall, such a procedure accommodates a more robust watermark for the same level of visual degradation because the watermark is attenuated where it is truly objectionable and enhanced where it is not. Furthermore, some authors propose that the magnitude of a signal which can be imperceptibly placed in the presence of a reference signal can be described by a non-linear mapping of magnitude to local contrast. In this paper we derive such a mapping function experimentally by determining the point of just noticeable difference between a reference image and a reference image with watermark.
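The edge/texture distinction described above can be sketched with a structure-tensor-style coherence measure: gradients in a window that share one dominant orientation indicate a connected directional edge (gain suppressed), while strong but isotropic gradients indicate texture (gain boosted). This is a minimal illustration under assumed thresholds and gain factors, not the paper's actual method.

```python
# Sketch: distinguish connected directional edges from high-contrast
# texture via gradient-orientation coherence, then modulate the
# watermark gain. All thresholds and gain factors are illustrative
# assumptions, not values from the paper.
import math

def local_gain(gradients, base_gain=1.0):
    """gradients: list of (gx, gy) pairs from a local window.
    Returns a watermark gain: suppressed on coherent (directional)
    edges to avoid ringing, boosted in busy texture where masking
    hides the signal."""
    # Structure-tensor style sums over the window.
    gxx = sum(gx * gx for gx, gy in gradients)
    gyy = sum(gy * gy for gx, gy in gradients)
    gxy = sum(gx * gy for gx, gy in gradients)
    energy = gxx + gyy                      # local contrast energy
    if energy == 0:
        return base_gain                    # flat area: leave gain alone
    # Coherence in [0, 1]: 1 = one dominant orientation (edge),
    # 0 = isotropic gradients (texture).
    coherence = math.hypot(gxx - gyy, 2 * gxy) / energy
    if coherence > 0.8:                     # connected directional edge
        return base_gain * 0.5              # suppress: ringing is visible here
    if energy > 100.0:                      # high contrast, no direction
        return base_gain * 1.5              # texture masks a stronger mark
    return base_gain
```

In a real embedder the gain would vary continuously with coherence and contrast rather than switching at hard thresholds.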
Fragile Techniques
Wavelet-based image compression and content authentication
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become an urgent problem for content owners and distributors. Digital watermarking has provided a valid solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. Here, we will concentrate on fragile watermarking of digital images, which is used for image content authentication. Our fragile watermarking method is heavily based on the new image compression standard JPEG 2000. We choose a compressed bit stream from JPEG 2000 as the hash of an image and embed the hash back into the image. The exceptional compression performance of JPEG 2000 resolves the tradeoff between small hash size and high hash confidence level. In the authentication stage, the embedded compressed bit stream is extracted and compared with the compressed bit stream of the image to be authenticated. The authentication decision comes from the comparison result. Besides content authentication, we will also show how to employ this watermarking method for hiding one image into another.
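The compress-hash-embed-compare flow can be sketched as below. As a stand-in sketch only: zlib replaces the JPEG 2000 codec, and LSB replacement replaces the actual watermark embedder; both substitutions, and the idea of hashing only the LSB-cleared pixels so embedding does not invalidate the hash, are illustrative assumptions rather than the paper's design.

```python
# Sketch of a fragile-watermark authentication flow: compress the
# image content to form a hash, embed the hash back into the image,
# then re-derive and compare at authentication time. zlib stands in
# for JPEG 2000 and LSB replacement for the real embedder.
import zlib

def make_hash(pixels: bytes) -> bytes:
    """Compressed bit stream of the image content, with the LSB
    plane cleared so embedding does not change the hash input."""
    return zlib.compress(bytes(p & 0xFE for p in pixels))

def embed(pixels: bytes, payload: bytes) -> bytearray:
    """Embed payload bits into pixel LSBs (toy fragile watermark)."""
    marked = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(marked):
        raise ValueError("image too small for payload")
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & 0xFE) | bit
    return marked

def extract(marked: bytes, n_bytes: int) -> bytes:
    """Recover the embedded payload from pixel LSBs."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (marked[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)
```

Authentication then checks that the extracted payload equals the hash recomputed from the image under test; any content change breaks the match.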
IoT & WoT
A Resource Oriented Architecture for the Web of Things
Many efforts are centered around creating large-scale networks of “smart things” found in the physical world (e.g., wireless sensor and actuator networks, embedded devices, tagged objects). Rather than exposing real-world data and functionality through proprietary and tightly-coupled systems, we propose to make them an integral part of the Web. As a result, smart things become easier to build upon. Popular Web languages (e.g., HTML, Python, JavaScript, PHP) can be used to easily build applications involving smart things, and users can leverage well-known Web mechanisms (e.g., browsing, searching, bookmarking, caching, linking) to interact with and share these devices. In this paper, we begin by describing the Web of Things architecture and best practices based on the RESTful principles that have already contributed to the popular success, scalability, and modularity of the traditional Web. We then discuss several prototypes designed in accordance with these principles to connect environmental sensor nodes and an energy monitoring system to the World Wide Web. We finally show how Web-enabled smart things can be used in lightweight ad-hoc applications called “physical mashups”.
A Smart Tags Driven Service Platform for Enabling Ecosystems of Connected Objects
The Internet of Things (IoT) is about connecting objects, things and devices and combining them with a set of novel services. The IoT market is progressing unstoppably, introducing many changes across industries, from both the technological and business perspectives. Optimization of the whole value chain provides many opportunities for improvements leveraging IoT technologies, in particular if information about the products is available and shareable. The TagItSmart project is creating an open, interoperable set of components that can be integrated into any cloud-based platform to address the challenges related to the lifecycle management of new innovative services. TagItSmart is a three-year project (2016–2018), consisting of 15 consortium partners from Europe. The project is funded under the Horizon 2020 program. The main targets of TagItSmart are everyday mass-market objects not normally considered part of an IoT ecosystem. These new smarter objects will dynamically change their status in response to a variety of factors and be seamlessly tracked during their lifecycle. This will change the way users-to-things interactions are viewed. Combining the power of functional inks with the pervasiveness of digital and electronic markers, a huge number of objects will be equipped with cheap sensing capabilities, thus being able to capture new contextual information. Besides this, the ubiquitous presence of smartphones with their cameras and NFC readers will create the perfect bridge between everyday users and their objects. This will create a completely new flow of crowdsourced information that can be exploited by new services.
Cloud Computing, REST and Mashups to Simplify RFID Application Development and Deployment
While of increasing importance for the real-time enterprise, deployments of Internet of Things infrastructures such as RFID remain complex and expensive. In this paper, we illustrate these challenges by studying the applications of the EPC Network which is an RFID standards framework that aims to facilitate interoperability and application development. We show how the use of blueprints that were successful on the Web can help to make the adoption of these standards less complex. We discuss in particular how Cloud Computing, RESTful interfaces, Real-time Web (Websockets and Comet) and Web 2.0 Mashups can simplify application development, deployments and maintenance in a common RFID application. Our analysis also illustrates that RFID/EPC Network applications are an excellent playground for Web of Things technologies and that further research in this field can significantly contribute to making real-world applications less complex and cost-intensive.
Cyber–Physical–Social Frameworks for Urban Big Data Systems: A Survey
The integration of things’ data on the Web and Web linking for things’ description and discovery is leading the way towards smart Cyber–Physical Systems (CPS). The data generated in CPS represents observations gathered by sensor devices about the ambient environment that can be manipulated by computational processes of the cyber world. Alongside this, the growing use of social networks offers near real-time citizen sensing capabilities as a complementary information source. The resulting Cyber–Physical–Social System (CPSS) can help to understand the real world and provide proactive services to users. The nature of CPSS data brings new requirements and challenges to different stages of data manipulation, including identification of data sources, processing and fusion of different types and scales of data. To gain an understanding of the existing methods and techniques which can be useful for a data-oriented CPSS implementation, this paper presents a survey of the existing research and commercial solutions. We define a conceptual framework for a data-oriented CPSS and detail the various solutions for building human–machine intelligence.
Giving RFID a REST: Building a Web-Enabled EPCIS
The Electronic Product Code Information Service (EPCIS) is a standard which defines interfaces enabling RFID events to be captured and queried. The query interface, implemented with WS-* Web services, enables business applications to consume and share data within and across companies, to form a global network of independent EPCIS instances. However, the interface limits the application space to the rather powerful platforms which understand WS-* Web services. In this paper we propose seamlessly integrating this network into the Web by designing a RESTful (REpresentational State Transfer) architecture for the EPCIS. Using this approach, each query, tagged object, location or RFID reader gets a unique URL that can be linked to, exchanged in emails, browsed for, bookmarked, etc. Additionally, this paradigm shift allows Web languages like HTML and JavaScript to directly use RFID data to fast-prototype light-weight applications such as mobile applications or Web mashups. We illustrate these benefits with a JavaScript mashup platform that integrates several services on the Web (e.g., Twitter, Wikipedia, etc.) with RFID data to allow managers along the supply chain and customers to get comprehensive data about their products.
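The core idea of the RESTful design, i.e. that every query, tagged object, location, and reader becomes an addressable, linkable resource, can be sketched as below. The URL layout, field names, and in-memory store are illustrative assumptions, not the actual Web-enabled EPCIS interface.

```python
# Sketch of RESTful resource naming for EPCIS-style RFID data:
# every tag, reader and query is an addressable URL that can be
# linked to, bookmarked, or consumed directly from JavaScript.
# The URL scheme and event fields are illustrative assumptions.
from urllib.parse import quote

BASE = "http://epcis.example.com"
events = []   # in-memory stand-in for the EPCIS repository

def capture(epc, reader, location):
    """POST-style capture: record one RFID read event."""
    events.append({"epc": epc, "reader": reader, "location": location})

def tag_url(epc):
    """Stable URL for a tagged object; EPC URNs are percent-encoded
    so the identifier survives as a single path segment."""
    return f"{BASE}/epc/{quote(epc, safe='')}"

def query_events(location):
    """GET-style query: all events seen at one location resource,
    i.e. what GET {BASE}/location/{location}/events would return."""
    return [e for e in events if e["location"] == location]
```

A mashup (e.g., the JavaScript platform mentioned above) would simply fetch these URLs and join the results with other Web services.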
Digimarc Discover on Google Glass
This paper reports on the implementation of the Digimarc Discover Platform on Google Glass, enabling the reading of a watermark embedded in printed material or audio. The embedded watermark typically contains a unique code that identifies the containing media or object and a synchronization signal that allows the watermark to be read robustly. The Digimarc Discover smartphone application can read the watermark from a small portion of a printed image presented at any orientation or reasonable distance. Likewise, Discover can read the recently introduced Digimarc Barcode to identify and manage consumer packaged goods in the retail channel. The Digimarc Barcode has several advantages over the traditional barcode and is expected to save the retail industry millions of dollars when deployed at scale. Discover can also read an audio watermark from ambient audio captured using a microphone. The Digimarc Discover platform has been widely deployed on the iPad, iPhone, and many Android-based devices, but it had not yet been implemented on a head-worn wearable device, such as Google Glass. Implementing Discover on Google Glass is a challenging task due to the current hardware and software limitations of the device. This paper identifies the challenges encountered in porting Discover to Google Glass and reports on the solutions created to deliver a prototype implementation.
A Web of Things Application Architecture - Integrating the Real-World into the Web
A central concern in the area of pervasive computing has been the integration of digital artifacts with the physical world and vice-versa. Recent developments in the field of embedded devices have led to smart things increasingly populating our daily life. We define smart things as digitally enhanced physical objects and devices that have communication capabilities. Application domains are for instance wireless sensor and actuator networks in cities, making them more context-aware and thus smarter. New appliances such as smart TVs, alarm clocks, fridges or digital-picture frames make our living-rooms and houses more energy efficient and our lives easier. Industries benefit from increasingly more intelligent machines and robots. Ordinary objects tagged with radio-tags or barcodes become linked to virtual information sources and offer new business opportunities. As a consequence, Internet of Things research is exploring ways to connect smart things together and build upon these networks. To facilitate these connections, research and industry have come up over the last few years with a number of low-power network protocols. However, while getting increasingly more connected, embedded devices still form multiple, small, incompatible islands at the application layer: developing applications using them is a challenging task that requires expert knowledge of each platform. As a consequence, smart things remain hard to integrate into composite applications. To remedy this, several service platforms proposing an integration architecture have appeared in recent years. While some of them are successfully implemented on some appliances and machines, they are, for the most part, not compatible with one another. Furthermore, their complexity and lack of well-known tools mean they have reached only a relatively small community of expert developers, and hence their usage in applications has been rather limited.
On the other hand, the Internet is a compelling example of a scalable global network of computers that interoperate across heterogeneous hardware and software platforms. On top of the Internet, the Web illustrates well how a set of relatively simple and open standards can be used to build very flexible systems while preserving efficiency and scalability. The cross-integration and development of composite applications on the Web, along with its ubiquitous availability across a broad range of devices (e.g., desktops, laptops, mobile phones, set-top boxes, gaming devices, etc.), make the Web an outstanding candidate for a universal integration platform. Web sites no longer offer only pages, but Application Programming Interfaces that can be used by other Web resources to create new, ad-hoc and composite applications running in the computing cloud and accessed by desktops or mobile computers. In this thesis we use the Web and its emerging technologies as the basis of a smart things application integration platform. In particular, we propose a Web of Things application architecture offering four layers that simplify the development of applications involving smart things. First, we address device accessibility and propose implementing, on smart things, the architectural principles that are at the heart of the Web, such as Representational State Transfer (REST). We extend the REST architecture by proposing and implementing a number of improvements to fit the special requirements of the physical world, such as the need for domain-specific proxies or real-time communication. In the second layer we study findability: in a Web populated by billions of smart things, how can we identify the devices we can interact with, the devices that provide the right service for our application?
To address these issues we propose a lightweight metadata format that search engines can understand, together with a Web-oriented discovery and lookup infrastructure that leverages the particular context of smart things. While the Web of Things fosters a rather open network of physical objects, it is very unlikely that in the future access to smart things will be open to anyone. In the third layer we propose a sharing infrastructure that leverages the social graphs encapsulated by social networks. We demonstrate how this helps share smart things in a straightforward, user-friendly and personal manner, building a Social Web of Things. Our primary goal in bringing smart things to the Web is to facilitate their integration into composite applications. Just as Web developers and tech-savvies create Web 2.0 mashups (i.e., lightweight, ad-hoc compositions of several services on the Web), they should be able to create applications involving smart things with similar ease. Thus, in the composition layer we introduce physical mashups and propose a software platform, built as an extension of an open-source workflow engine, that offers basic constructs which can be used to build mashup editors for the Web of Things. Finally, to test our architecture and the proposed tools, we apply them to two types of smart things. First we look at wireless sensor networks, in particular at energy and environmental monitoring sensor nodes. We evaluate the benefits of applying the proposed architecture first empirically by means of several prototypes, then quantitatively by running performance evaluations, and finally qualitatively with the help of several developers who used our frameworks to develop mobile and Web-based applications.
Then, to better understand and evaluate how the Web of Things architecture can facilitate the development of real-world aware business applications, we study automatic identification systems and propose a framework for bringing RFID data to the Web and global RFID information systems to the cloud. We evaluate the performance of this framework and illustrate its benefits with several prototypes. Put together, these contributions materialize into an ecosystem of building-blocks for the Web of Things: a world-wide and interoperable network of smart things on which applications can be easily built, one step closer to bridging the gap between the virtual and physical worlds.
Towards the Web of Things: Web Mashups for Embedded Devices
In the “Internet of Things” the physical world becomes integrable with computer networks. Embedded computers or visual markers on everyday objects allow things and information about them to be accessible by software in the virtual world. However, this integration is based on competing standards or hacks; it therefore requires technical expertise and is time consuming. Following the long tail of Web 2.0 mashup applications, we propose a similar approach for integrating real-world devices into the Web, allowing them to be easily combined with other virtual and physical resources. In this paper we discuss possible integration approaches, in particular how we apply the REST principles to wireless sensor networks and smart objects. We further describe two concrete implementations: on the Sun SPOT platform and on the Ploggs wireless energy monitors. Finally, we demonstrate how these two implementations can be used to quickly create new prototypes in a mashup manner.
Mobile
Assessment of Camera Phone Distortion and Implications for Watermarking
The paper presents a watermark robustness model based on the mobile phone camera's spatial frequency response and watermark embedding parameters such as density and strength. A new watermark robustness metric based on spatial frequency response is defined. The robustness metric is computed by measuring the area under the spatial frequency response for the range of frequencies covered by the watermark synchronization signal while excluding the interference due to aliasing. By measuring the distortion introduced by a particular camera, the impact on watermark detection can be understood and quantified without having to conduct large-scale experiments. This in turn can provide feedback on adjusting the watermark embedding parameters and finding the right trade-off between watermark visibility and robustness to distortion. In addition, new devices can be quickly qualified for their use in smart image applications. The iPhone 3G, iPhone 3GS, and iPhone 4 camera phones are used as examples in this paper to verify the behavior of the watermark robustness model.
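The metric described above, i.e. area under the spatial frequency response over the band occupied by the synchronization signal, while excluding the aliasing-dominated region, can be sketched as a clipped trapezoidal integration. The sample SFR values, band edges, and the use of the Nyquist frequency as the aliasing cutoff are illustrative assumptions.

```python
# Sketch of the SFR-based robustness metric: integrate the camera's
# spatial frequency response over the watermark synchronization
# band, ignoring frequencies past the aliasing limit. Band edges
# and the Nyquist cutoff here are illustrative assumptions.

def robustness_metric(freqs, sfr, band_lo, band_hi, nyquist):
    """Trapezoidal area under the SFR curve between band_lo and
    band_hi (cycles/pixel), clipped at the Nyquist frequency where
    the measured response is dominated by aliasing interference."""
    hi = min(band_hi, nyquist)
    area = 0.0
    for (f0, r0), (f1, r1) in zip(zip(freqs, sfr), zip(freqs[1:], sfr[1:])):
        if f1 <= band_lo or f0 >= hi:
            continue                      # segment lies outside the band
        # Clip the segment to [band_lo, hi], interpolating linearly.
        a, b = max(f0, band_lo), min(f1, hi)
        slope = (r1 - r0) / (f1 - f0)
        ra = r0 + slope * (a - f0)
        rb = r0 + slope * (b - f0)
        area += 0.5 * (ra + rb) * (b - a)
    return area
```

A larger area means more of the synchronization signal's energy survives the camera's optical path, so the metric ranks devices without large-scale detection experiments.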
Evolution of Middleware to Support Mobile Discovery
An emerging class of “Mobile Discovery” applications uses the camera and microphones on mobile devices to enable the recognition and identification of media and physical objects. Most of these applications are being commercialized in support of specific usage scenarios for Symbian, Android, and iOS platforms. A unique application that is targeting a broader range of usage scenarios is Digimarc’s Discover application. It simultaneously recognizes printed barcodes, digital watermarks, and audio fingerprints. A critical component of this application is optimizing the use of resources on the mobile device while ensuring the delivery of a positive user experience. This paper proposes a middleware for this purpose and discusses various techniques by which this is achieved. These techniques include employing thread management strategies appropriate for classes of sensors, utilizing logical sensors in the camera pipeline, and optimizing the underlying recognition technologies for the mobile platform.