November 21, 2011
TV TechCheck

Broadcasters Take on the "Second Screen"

A recent announcement by a group of broadcasters has once again drawn attention to the so-called "Second Screen." Broadcasters are quickly becoming familiar with the term, but for those not yet fully conversant with it or the technologies behind it, this week's TV TechCheck presents an overview of the latest thinking.

Second Screen Explained
"Second Screen" refers to the increasingly popular behavior among television viewers to simultaneously use a personal computing device while they are watching a TV program, for purposes that are somehow related to the TV program. While this activity has been in evidence for a number of years (initially via laptop PCs), the more recent proliferation of smartphones and tablets has stimulated a marked increase in the practice among consumers, to the point where the term "Second Screen" is now becoming a widely used term of art in media circles.

Most activity on the second-screen device to date has been strictly user-initiated, such as web searches for related content or postings on popular social media sites. More recently, some TV-specific social media sites have sprung up (so-called "Social TV" services, as described in the July 25, 2011 edition of TV TechCheck).

The latest wrinkle, adopted by broadcasters in their recent announcement, allows a TV channel to direct the user's second-screen experience by pointing the second-screen device to a corresponding website that presents companion content for the current program. That content can be presented dynamically and synchronously, meaning that as the TV program proceeds, the second-screen content updates accordingly. The process works whether the program is viewed live as broadcast or played back from a user's local storage (e.g., a DVR).

Enabling Technology
There are numerous technologies that can enable such coordination between devices. One popular technique is generically referred to as Automatic Content Recognition (ACR). ACR systems today typically utilize one or both of two basic processes, called Watermarking and Fingerprinting.

Watermarking inserts a frequent or continuous identifying signal into the audio or video of the program content. The watermark signal is imperceptible to human users, but recognizable by target devices. The signal must be adequately robust to pass through downstream processing, recording and playback, and must remain recognizable over noise and distortions that may occur in the end-user's environment. (Familiar examples of watermarking are the technologies used by today's electronic audience measurement systems, where each broadcast service includes a unique identifying watermark signal that is recognized by the reporting device.)
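To make the mechanism concrete, here is a minimal, purely illustrative sketch (in Python) of the spread-spectrum idea that many audio watermarking systems build on: a low-level pseudo-random signal keyed to an identifier is added to the host audio and later recovered by correlation. The sample rate, embedding strength, IDs and detection threshold below are assumptions made for the example, not a description of any broadcaster's actual system.

    # Toy spread-spectrum audio watermark; all parameters are illustrative only.
    import numpy as np

    SAMPLE_RATE = 48_000          # assumed audio sample rate
    CHIP_LEN = SAMPLE_RATE // 2   # half a second of watermark per repetition
    ALPHA = 0.005                 # embedding strength, kept low for inaudibility

    def pn_sequence(payload_id: int, length: int) -> np.ndarray:
        """Pseudo-random +/-1 sequence derived from the payload (e.g., a station ID)."""
        rng = np.random.default_rng(seed=payload_id)
        return rng.choice([-1.0, 1.0], size=length)

    def embed(audio: np.ndarray, payload_id: int) -> np.ndarray:
        """Add a low-level, repeating PN sequence to the host audio."""
        marked = audio.copy()
        pn = pn_sequence(payload_id, CHIP_LEN)
        for start in range(0, len(audio) - CHIP_LEN + 1, CHIP_LEN):
            marked[start:start + CHIP_LEN] += ALPHA * pn
        return marked

    def detect(audio, candidate_ids):
        """Correlate the received audio against each candidate's PN sequence."""
        segment = audio[:CHIP_LEN]
        best_id, best_score = None, 0.0
        for pid in candidate_ids:
            score = abs(np.dot(segment, pn_sequence(pid, CHIP_LEN))) / CHIP_LEN
            if score > best_score:
                best_id, best_score = pid, score
        return best_id if best_score > ALPHA / 2 else None  # toy threshold

    if __name__ == "__main__":
        host = np.random.default_rng(0).normal(0, 0.1, SAMPLE_RATE * 2)  # stand-in for program audio
        marked = embed(host, payload_id=1234)       # 1234 = hypothetical station/content ID
        print(detect(marked, [1111, 1234, 9999]))   # -> 1234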

On the other hand, Fingerprinting does not require the insertion of an identifying signal. Instead, the end-user's device includes a client that examines the audio or video of a particular piece of content for specific characteristics inherent to it, and then compares the result to an online library of similar analyses. If a match is found, the content is thereby identified to the user device. (Familiar applications of this technique are the music identification apps that have become popular on smartphones, whereby a user holds the smartphone near a speaker playing music for a few seconds, and the app returns title and artist information.)
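For comparison, a similarly hedged sketch of the fingerprinting idea follows. This toy version simply records the dominant frequency of each analysis frame and matches a simulated microphone capture against a small reference library by counting agreeing frames; commercial systems use far more robust features and indexing, and every parameter here is an assumption made for illustration.

    # Toy audio fingerprinting by dominant spectral peak; illustrative only.
    import numpy as np

    FRAME = 4096   # samples per analysis frame (assumed)
    HOP = 2048

    def fingerprint(audio):
        """Return the dominant-frequency bin of each frame as a crude fingerprint."""
        peaks = []
        for start in range(0, len(audio) - FRAME + 1, HOP):
            frame = audio[start:start + FRAME] * np.hanning(FRAME)
            spectrum = np.abs(np.fft.rfft(frame))
            peaks.append(int(np.argmax(spectrum[1:]) + 1))  # skip the DC bin
        return peaks

    def match(query, library):
        """Return the library entry whose fingerprint best overlaps the query."""
        best_title, best_hits = None, 0
        for title, ref in library.items():
            hits = sum(1 for q, r in zip(query, ref) if q == r)
            if hits > best_hits:
                best_title, best_hits = title, hits
        # Require a majority of frames to agree before declaring a match.
        return best_title if best_hits > len(query) // 2 else None

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        t = np.arange(48_000 * 2) / 48_000
        content_a = np.sin(2 * np.pi * 440 * t) + 0.05 * rng.normal(size=t.size)
        content_b = np.sin(2 * np.pi * 880 * t) + 0.05 * rng.normal(size=t.size)
        library = {"Program A": fingerprint(content_a), "Program B": fingerprint(content_b)}
        captured = content_a + 0.2 * rng.normal(size=t.size)  # noisy microphone re-capture
        print(match(fingerprint(captured), library))          # -> Program A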

Although both processes can be used in tandem, certain applications may be better served by one or the other approach. For example, Fingerprinting may be more applicable when a particular piece of content requires similar identification across a range of distribution venues, whereas Watermarking may be preferred to identify a particular instance of that content, such as when it is broadcast over a channel that must be discretely identified (especially when that channel is one of many local affiliates of a national network, many of which may be broadcasting their own instance of that same content at the same time). Further, because Watermarking is frequently or continuously inserted in the content, it may be more appropriate for signaling dynamic associations, such as where time synchronization of a second stream of content is required. In contrast, Fingerprinting is a more static identifier of an entire program (or at least a particular program segment).

Broadcasters Take Control
The method chosen by this new alliance of local broadcasters appears to use audio watermarking for ACR. Users will download a free app, called ConnecTV, to their second-screen device(s). The app uses the device's microphone to listen to a TV program's soundtrack (via simple acoustical monitoring in the viewing room) and extracts the watermark signal. Using the watermark to identify the content and station, the app can then navigate to the appropriate online location where second-screen content is stored, and synchronize its display to the part of the program currently being viewed.
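ConnecTV's internals have not been published, so the following is only a hypothetical sketch of the listen, identify, fetch and synchronize loop such a watermark-driven app might follow. The payload fields, the station and content IDs, and the companion-content table are all invented for illustration.

    # Hypothetical second-screen flow: decode a watermark, then look up and
    # time-align companion content. All identifiers and data are invented.
    from dataclasses import dataclass

    @dataclass
    class WatermarkPayload:
        station_id: str      # which local channel is playing
        content_id: str      # which program (or program instance)
        timecode: float      # seconds into the program at the moment of capture

    # Stand-in for an online companion-content service, keyed by content ID.
    COMPANION_CONTENT = {
        "ep-001": [(0.0, "Cast bios"), (30.0, "Poll: who wins?"), (60.0, "Sponsor offer")],
    }

    def capture_and_decode() -> WatermarkPayload:
        """Pretend to listen via the microphone and decode an audio watermark."""
        return WatermarkPayload(station_id="WXYZ", content_id="ep-001", timecode=31.5)

    def synced_item(payload: WatermarkPayload) -> str:
        """Pick the companion item whose start time most recently passed."""
        current = "No companion content yet"
        for start, item in COMPANION_CONTENT.get(payload.content_id, []):
            if start <= payload.timecode:
                current = item
        return current

    if __name__ == "__main__":
        payload = capture_and_decode()
        print(payload.station_id, payload.content_id, synced_item(payload))  # -> WXYZ ep-001 Poll: who wins?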



The second screen content can include synchronized ancillary content (e.g., sports-player stats, drama-character backstories, links to further information), interactive elements (e.g., play-along games, polls, e-commerce), and brand-extension or promotional content for the station, program or advertiser. (Material related to a station's local ad inventory can be included in the second-screen content.) The app also includes a social media platform allowing viewers to interact with one another, and a recommendation engine offering promotional opportunities for similar content.

ConnecTV is already in use for national channels and programs, but the station group announcement marks its first foray into local TV channel application. The service is currently in a closed beta, and is scheduled for open launch in early 2012. For more about the ConnecTV partnership, and some examples of the app in use, look here.

NAB Accepting Nominations for 2012
NAB Engineering Achievement Awards

NAB is currently accepting nominations for the 2012 NAB Engineering Achievement Awards. Established in 1959, the NAB Engineering Achievement Awards are presented each year to individuals for their outstanding accomplishments in the broadcast industry. In 1991, NAB began giving awards separately for achievements in radio and television. The award winners will be recognized at the Technology Luncheon at the 2012 NAB Show on April 18 in Las Vegas, Nev.

Additional information and a nomination form are available on NAB’s website. The deadline for nominations is January 23, 2012.







TV TechCheck will not be published on November 28, 2011
but will return on December 5, 2011.

