IDG Contributor Network: AI goes to the movies to identify content you will want to watch


The digital technology industry and the media and entertainment industry are beginning to come together to solve a common problem: how to extract, unlock, harness and make better use of the massive amounts of video content and data they produce. 

Anyone who has ever appeared in or produced a movie, commercial or business video knows that a good portion of the footage often winds up on the proverbial cutting room floor. The same is true of digital data: it is estimated that more than 80 percent of enterprise data is “dark”—created but never used.


What if there were a way to pick up this data equivalent of cutting room discards and turn it into new assets? What if technologies such as IBM Watson could work with video editors to unearth data treasures that benefit and excite both producers and audiences? It sounds like the plot of a blockbuster movie, but it is now more fact than fiction.

Cloud-based cognitive solutions and AI technologies at work

Digital technology companies are now busy developing cloud-based cognitive solutions to help uncover new data and insights about video content and its viewers. New AI technologies can help media companies identify meaningful content in video. Using deep learning, these same companies can identify viewers who will want to watch newly created derivative works based on this content. These solutions also can be used by companies in other industries that depend on video to communicate with employees, partners or customers.

IBM recently demonstrated such a system at a golf tournament. The system was trained to “watch” and “hear” broadcast videos of the tournament in real time, accurately identifying the start and end frames of exciting moments using commentator tone, player celebrations, high fives and other indicators. To prove it could generate highlights in real time, IBM created a dashboard that showed the latest highlight and its excitement level.
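The article does not describe IBM's actual model, but the basic idea of turning per-frame excitement cues into highlight clips can be sketched as a toy scoring function. Everything here is an illustrative assumption: the signal names, the weights and the threshold are invented for the sketch, not taken from Watson.

```python
from dataclasses import dataclass

@dataclass
class FrameSignals:
    """Hypothetical per-frame excitement cues, each normalized to 0..1."""
    commentator_tone: float  # excitement detected in the announcer's voice
    crowd_noise: float       # audio energy from the crowd
    gesture_score: float     # confidence that players are celebrating

def excitement(s: FrameSignals, weights=(0.4, 0.3, 0.3)) -> float:
    """Blend the cues into a single excitement score (weights are assumptions)."""
    w_tone, w_crowd, w_gesture = weights
    return (w_tone * s.commentator_tone
            + w_crowd * s.crowd_noise
            + w_gesture * s.gesture_score)

def highlight_spans(frames, threshold=0.6):
    """Return (start, end) frame-index pairs where excitement stays above threshold."""
    spans, start = [], None
    for i, frame in enumerate(frames):
        hot = excitement(frame) >= threshold
        if hot and start is None:
            start = i                      # an exciting moment begins
        elif not hot and start is not None:
            spans.append((start, i - 1))   # it just ended
            start = None
    if start is not None:                  # clip still open at end of footage
        spans.append((start, len(frames) - 1))
    return spans
```

A real system would derive these signals from audio and vision models rather than hand-set numbers, but the thresholding step that marks start and end frames would look much the same.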

IBM Watson analyzes video

This week, IBM is also showcasing a new cloud-based service that will use Watson’s cognitive capabilities to provide a deep understanding of video, generating new metadata that identifies keywords, concepts, visual imagery, tone and emotional context. Media companies will be able to use the detailed data to better match content and advertising with viewer interests.

The combination of cognitive extraction technologies, which understand complex video content, and Watson learning methods, which identify viewer preferences, will be an industry breakthrough. Building a deep semantic understanding of a video provides a richer analysis of what is in it. Using Watson to analyze a combination of viewer behaviors (what people are watching, how long they are watching and what they are saying on social media) provides a deeper understanding of what they may want to watch next, and helps inform advertising and marketing teams. Together, the two technologies form a very powerful combination.
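To make the pairing concrete, here is a minimal sketch of how generated video metadata might be matched against a viewer's history. The scoring scheme (tag overlap weighted by how much of each past video the viewer actually watched) is an illustrative assumption, not IBM's method.

```python
def interest_score(video_tags, viewer_history, watch_fractions):
    """Score a candidate video for one viewer.

    video_tags      -- set of metadata tags for the candidate video
    viewer_history  -- list of tag sets, one per video the viewer has seen
    watch_fractions -- how much of each past video was watched (0..1)
    """
    score = 0.0
    for tags, frac in zip(viewer_history, watch_fractions):
        if not video_tags or not tags:
            continue
        # Jaccard similarity between the candidate's tags and a watched video's tags
        overlap = len(video_tags & tags) / len(video_tags | tags)
        # Videos the viewer finished count more than ones they abandoned
        score += frac * overlap
    return score / max(len(viewer_history), 1)
```

A production recommender would fold in many more signals (social sentiment, recency, advertising goals), but the core move is the same: richer per-video metadata makes the behavioral comparison sharper.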