The Evolution of Visual Social Media

Two years ago, users were sharing 1.8 billion images every day on social media. With roughly 4 billion images now shared daily, visual listening, or “image intelligence,” is fast becoming one of the hottest technologies in social media. Of the images that contain a brand, 85% do not mention the brand in text, which means that if you analyze only text, you are missing a huge swath of insight. The panelists discussed how their respective organizations pull intelligence from images in ways that can be used predictively for business and research, and how they see the field evolving.

Glen Szczypka, Principal Research Scientist for NORC at The University of Chicago, talked about using image recognition to study how media messaging and images affect public health concerns. With regard to tobacco use, they are looking at what images tobacco companies are using to attract customers; on the health side, they are working with organizations like the Truth Initiative to see what messages are effective in helping people stop smoking. He noted that it is particularly important to explore Twitter, Instagram, and other platforms where young adults are posting text and images related to smoking.


Ethan Goodman, Senior Vice President of Shopper Experience at The Mars Agency, helps Fortune 500 companies plan marketing activities with large retailers; for example, they are helping Campbell’s sell more soup at Walmart, Kroger, and other stores. They use image recognition to get inside the heads of customers and figure out how to market to them more effectively. Ethan is primarily focused on finding their clients’ logos in photos, as well as competitors’ logos. As the practice has evolved, they have started combining logo detection with sentiment and object analysis. He noted that their agency is a customer of, and investor in, Ditto Labs.

Glen’s organization retrieves images from Instagram on the basis of tags, such as #blunt when used as slang for inexpensive cigars. The tags both limit the data set to what Instagram allows and help to find objects, such as blunts, that can’t be recognized as easily as logos. They then use Ditto to recognize patterns in the image pixels and find logos (over 40% of the images they retrieve contain branded content). When analyzing which brands were featured in images tagged with #blunt, Swisher Sweets was by far the most common, which led Glen’s team to look into whether the content was organic or whether Swisher Sweets is encouraging people to post pictures of their blunts.
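As a rough illustration of that workflow, here is a minimal Python sketch of the aggregation step. The per-image logo detections are made-up stand-ins for what a tag-based pull plus a recognition service would return; the image IDs, brand names, and counts are purely illustrative.

```python
from collections import Counter

# Hypothetical output of a tag-based image pull followed by logo
# recognition: each image ID maps to the logos detected in that image.
detections = {
    "img_001": ["Swisher Sweets"],
    "img_002": [],
    "img_003": ["Swisher Sweets", "Dutch Masters"],
    "img_004": ["Backwoods"],
    "img_005": ["Swisher Sweets"],
}

# Share of retrieved images that contain any branded content.
branded = [img for img, logos in detections.items() if logos]
print(f"Branded share: {len(branded) / len(detections):.0%}")

# Which brands dominate the tag? Count every logo occurrence.
brand_counts = Counter(logo for logos in detections.values() for logo in logos)
for brand, count in brand_counts.most_common(3):
    print(brand, count)
```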

Ethan’s firm, by contrast, retrieves images without any text attributes. Looking through the vast stream of photos containing logos, they were intrigued, and sometimes shocked, by what they found: everything from traditional use cases, such as images of people making dinner with Campbell’s soup, to images of people putting Skittles into a vodka bottle to make a lava lamp. Ethan then uses the technology to glean insights that can drive a creative idea or marketing strategy. He explained that it’s important to know what other brands appear in a targeted brand’s advocacy set. He noted that in the near term, clients will increasingly use this strategy to inform advertising decisions and ad targeting: you will be able to serve a display ad for a brand to someone who has been pictured with that brand’s logo a large number of times.
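To make the “advocacy set” idea concrete, here is a hedged sketch: given per-image logo detections, count which other brands show up in the same photos as a target brand. The function, data, and brand names are illustrative assumptions, not The Mars Agency’s actual tooling.

```python
from collections import Counter

def advocacy_set(detections, target_brand, top_n=5):
    """Count which brands co-occur with a target brand in the same
    photos -- a rough proxy for the brand's advocacy set."""
    co_counts = Counter()
    for logos in detections.values():
        if target_brand in logos:
            co_counts.update(l for l in logos if l != target_brand)
    return co_counts.most_common(top_n)

# Hypothetical per-image detections, as a recognition API might return.
detections = {
    "img_1": ["Campbell's", "Kraft"],
    "img_2": ["Campbell's"],
    "img_3": ["Campbell's", "Pepperidge Farm", "Kraft"],
}
print(advocacy_set(detections, "Campbell's"))  # [('Kraft', 2), ('Pepperidge Farm', 1)]
```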

As social media becomes increasingly visual, users are going to see more applications with buttons that let them learn more or buy without leaving the app. For example, a user will be able to click on a pair of glasses in an image, find retailers offering those glasses, and even add them to their cart.

When asked how they train machines to find images, Glen explained that they use an iterative process: a human looks at an image or text and labels it as, for example, “smoking” or “blunt.” The images the machine subsequently finds are compared with what the human found, and if agreement is a bit off, they train the machine a bit more.
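A minimal sketch of that feedback loop, assuming a small review batch of human labels alongside the machine’s predictions; the labels, threshold, and helper name are assumptions for illustration.

```python
def agreement(machine_labels, human_labels):
    """Fraction of images where the machine's label matches the human's."""
    matches = sum(m == h for m, h in zip(machine_labels, human_labels))
    return matches / len(human_labels)

# Hypothetical labels for a small review batch.
human = ["smoking", "blunt", "other", "smoking", "blunt"]
machine = ["smoking", "other", "other", "smoking", "blunt"]

score = agreement(machine, human)
print(f"Agreement: {score:.0%}")  # 80%
if score < 0.90:  # the 90% threshold is an assumed cutoff
    print("Below threshold -- correct the misses and retrain.")
```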

In terms of advice for companies wanting to start using image recognition, Ethan recommended starting now, noting that the risk of getting started is low. Think about how visual listening fits into your larger strategy and how it can complement your text listening tool. Glen noted that if you need to limit the data set, choose the tags you search on carefully; for example, don’t use your brand name itself as a keyword.

The panel also discussed ecosystem challenges stemming from differences in data access across platforms. Images can generally be pulled through APIs, but there are limits: on Facebook, only publicly available images are accessible; Instagram recently changed its terms so that a development license is required to use its API; and other platforms simply don’t allow access to their images at all. In practice, you may not be able to pull 100% of what is out there, but you can usually retrieve a solid, representative sample.
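As a sketch of what sampling through a rate-limited API can look like, here is a minimal Python example assuming a generic paginated JSON endpoint with `data` and `paging.next` fields; the endpoint shape, token handling, and field names are assumptions, not any specific platform’s documented API.

```python
import time
import requests

def pull_public_images(endpoint, token, max_pages=10):
    """Page through a hypothetical media endpoint, backing off when
    rate-limited. The goal is a representative sample, not completeness."""
    images, url = [], endpoint
    for _ in range(max_pages):
        resp = requests.get(url, params={"access_token": token})
        if resp.status_code == 429:  # rate limited: wait, then retry
            time.sleep(int(resp.headers.get("Retry-After", 60)))
            continue
        resp.raise_for_status()
        payload = resp.json()
        images.extend(payload.get("data", []))
        url = payload.get("paging", {}).get("next")
        if not url:  # no further public results available
            break
    return images
```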

You should also set expectations about not only what is available but also how long it takes to retrieve the data for analysis. Ethan mentioned that it takes between three and seven days to train a machine to precisely identify a logo, which is definitely something to keep in mind when planning future image recognition efforts.
