
Thoughts from Big Boulder

Big Boulder 2016 is a wrap! We are grateful to the speakers, panelists and moderators for sharing their ideas and insights. And we are, of course, grateful to all of you who attended and made this not just a conference, but a community.

Our Big Boulder 2016 emcees, Mark Josephson, CEO of Bitly, and Farida Vis, Director of the Visual Social Media Lab at Sheffield University, shared several overarching themes that reverberated throughout this year’s event:

  • Big Boulder reveals what we don’t know, what we should know, and what we could know.
  • The evolving role of platforms, from machine learning to algorithms to bots, is fundamentally changing how we think about ourselves as human beings.
  • “It’s the power of us,” said Mark. The biggest problems in the world get solved by networks, not by one person sitting alone in a room. And everyone at Big Boulder is helping to move our industry forward.

“The conference is about what happens on stage and perhaps even more, what happens across the community of attendees off-stage,” said Chris Moody, VP of Data Strategy at Twitter and Chairman of the Board of the Big Boulder Initiative. “This room has a high concentration of people who can change an industry!”

You can view all of the blog coverage from Day 2 of Big Boulder 2016 below:

As we close the fifth year of Big Boulder, we plan to extend the insights and energy at this year’s event into opportunities to connect year-round.

We invite you to get involved in our Slack community and consider joining BBI as a member of our industry organization.

Thank you for an incredible Big Boulder 2016!

 


The Future of Bots

This session featured a fascinating conversation between Sam Mandel, Operating Partner at betaworks, and Chris Messina, Developer Experience Lead at Uber, moderated by Tyler Singletary, VP and GM of Klout and Consumer Data. So how are things really developing in this space? More has changed in the bot world in the last six months than in the past ten years. With progress and growth ramping up so quickly, it only made sense for the last panel of Big Boulder 2016 to look into the future of bots and their impact on human interaction.


Bots have always carried a “science-fiction fascination” that keeps people interested; it’s now sexy to talk about bots. Messina commented that bots are exciting because computing is shifting to become accessible to a wider range of people, which makes sense given that the number of consumers on the web grows every day, and not only on desktops. Messina noted that Mark Zuckerberg estimates the next generation of web users to come online will primarily use phones and other mobile devices.

This all raises several questions. Should bots be inside messenger apps at this moment? If you build it, will they come? Or will the creators and implementers of bots also have to acclimate the users?

Users’ expectations guide these decisions as well. As Messina told Mandel, his experience with Poncho, an app that delivers customizable weather forecasts, is that the program does not learn as fast as he’d like it to. That is to be expected: as Mandel put it, there is a lot of “grey area” in this new industry.

Sam Mandel went on to explain that the majority of consumers do not fully understand the world of bots or how to use them properly. Stepping back, creating simpler processes, and providing more direct training is crucial to moving the field forward. Put simply, talking to a bot is different from talking to a person; bots are just less capable. It is worth noting, though, that as most people have become trained in executing Google searches, the Google search engine has improved at understanding specific and customized requests. The “one-size-fits-all” approach doesn’t cut it anymore now that the dynamic is shifting.

Looking ahead, the future of bots lies with the constant change the world has experienced over the past ten years. There will always be room for improvement, for better connections between humans and applications. As users move to a “much more diffuse world,” the technology will align more and more with how humans interact in the world today. The potential is there, and one can only wait in anticipation to see where it leads.


Messaging in the World of Bots

When moderator Chris Moody, VP of Data Strategy at Twitter, polled the Big Boulder audience, about 15% of the group had used Kik, a mobile messaging app. Michael Roberts, Head of Chat at Kik, said that 15% is a good show of hands in an audience like Big Boulder, because Kik is used mostly by teens in the United States. In fact, of the 300 million registered Kik users, 40% are U.S. teens. Michael said that high usage comes from teens being the first generation of digital- and mobile-native users, who naturally know how to interact with bots.


Anonymity vs. Pseudonymity

Kik empowers users to form an identity inside the app. Rather than anonymity, which strips identity from a user, pseudonymity allows users to form any identity they choose. The average Kik user spends 87 minutes inside the app every day, ample time to develop a persona and express themselves.

Bots are Hot

“Right now there are a lot of trends coming together at the same time, including NLP, AI, and machine learning,” Michael explained. Bots are hot not because of any one of those trends on its own, but because they tie those trends together seamlessly inside of messaging apps.

The popularity of bots makes sense also because it’s easy to reach a digital-native audience. Bots are an interface that digital natives already understand. There’s no learning curve, which allows companies to reach these users easily.

Bots are hot also because they let companies build inside of apps that are already on mobile phones, providing a huge opportunity to reach customers. Bots are another tool in a mobile developer’s toolkit. “The future isn’t putting bots in products,” said Michael. “The future is building better products.”

Beyond One-to-One Conversations

Bots aren’t just one-to-one conversations between a user and a bot. Far from it. For example, the “@mention” feature in an app like Kik lets a user pull a tic-tac-toe bot into a conversation between friends. It’s not about talking with bots directly; it’s about adding bots into existing conversations.
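
To make the mechanics concrete, here is a toy, platform-agnostic sketch of a mention-triggered bot in a group chat. The names and dispatch logic are hypothetical and are not the Kik bot API; the point is simply that an “@mention” lets participants summon a bot into a conversation that is already happening.

```python
# Toy sketch of mention-triggered bots in a group chat
# (hypothetical names; this is NOT the Kik bot API).
import re

BOTS = {
    "tictactoe": lambda: "Tic-tac-toe board added to the chat. X goes first!",
    "weather": lambda: "Sunny and 75 in Boulder today.",
}

MENTION = re.compile(r"@(\w+)")


def handle_group_message(text):
    """If a message @mentions a known bot, inject that bot's reply into the chat."""
    return [BOTS[name.lower()]() for name in MENTION.findall(text)
            if name.lower() in BOTS]


print(handle_group_message("want to play? @tictactoe"))
```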

Games are another opportunity for bots to engage users in a broad way. According to Michael, 40% of all app content in the app store is a game, and 80% of revenue on mobile is through games. Games provide a tremendous platform for bots to move beyond one-to-one conversations and to be a seamless, natural part of the gaming experience.

Retailers with brick and mortar stores can bridge the gap between physical space and digital community using bots. Companies such as Sephora and H&M are using bots to connect with users, even when those users aren’t at a store.

Privacy, Control and Bots

As surfaced in other panels during Big Boulder, bots raise the question of how much messaging to a user is too much. Michael described how Kik and its Bot Shop strike a careful balance: retaining users by notifying them about activity without spamming them with too many bots or too many messages.

Kik also thinks about users in terms of trust, intimacy, and control. The company tailors experiences for users so that users retain privacy and safety.

Measuring Bot Success

Bot messaging is unlike any other app platform, according to Michael. Common app metrics, like MAU (monthly active users) or the number of app downloads, aren’t relevant to bots. Instead, Kik uses chat sessions as a better measure of success. Chat sessions reveal how long a user is inside a chat, how active the conversation is, and which other bots users bring into the messaging platform.
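
As a rough illustration of what a chat-session metric might look like, the sketch below groups a toy message log into sessions separated by a gap of inactivity and reports length, activity, and the bots involved. The field names and the five-minute gap are assumptions for illustration, not Kik’s actual definitions.

```python
# Minimal sketch (not Kik's methodology) of deriving chat-session metrics
# from a raw message log with hypothetical fields: user_id, ts, bot.
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=5)  # assumed inactivity gap that ends a session

messages = [  # toy event log for one user
    {"user_id": "u1", "ts": datetime(2016, 6, 23, 10, 0), "bot": None},
    {"user_id": "u1", "ts": datetime(2016, 6, 23, 10, 2), "bot": "weatherbot"},
    {"user_id": "u1", "ts": datetime(2016, 6, 23, 10, 30), "bot": None},
]


def sessions_for_user(events):
    """Group a user's messages into sessions separated by SESSION_GAP of inactivity."""
    sessions, current = [], []
    for event in sorted(events, key=lambda e: e["ts"]):
        if current and event["ts"] - current[-1]["ts"] > SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(event)
    if current:
        sessions.append(current)
    return sessions


for session in sessions_for_user(messages):
    duration = session[-1]["ts"] - session[0]["ts"]
    bots = {e["bot"] for e in session if e["bot"]}
    print(f"messages={len(session)} duration={duration} bots={sorted(bots)}")
```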

The future of bots is likely to feature bots not just as a single, siloed tool but instead as a platform-agnostic way to engage users.

 


Building Digital Analytics Capabilities in a B2B World

The final Pecha Kucha talk of the conference was delivered by Chuck Hemann, Director of Digital Analytics at Intel, who gave the audience a rundown of what it’s really like to build a digital analytics platform and team for a Fortune 50 company. In 2014, Hemann was asked to join the Intel team to begin a new department and transition the company from the B2C to the B2B world. Hemann covered the life cycle of this project over the course of the last two years, addressing three specific focus areas for those who may be undertaking similar projects in the future:

  • “Where did we start and what did we learn?” Two years ago, not only was there no Digital Analytics team to speak of at Intel, but the digital measurement framework was also extremely elementary for what they were trying to accomplish: one based on clicks, but not on attitudes. Internal reports were rarely consumed; most people weren’t even aware they existed. When Hemann arrived and began his work, he immediately hired senior-level talent with a wide range of skills to hit the ground running. Even with a high-caliber, experienced team, he noted that changing a measurement framework is at least a 6-9 month process, and that’s if it’s all done well, start to finish. Another note of importance: what is effective in the B2C space is not always (and sometimes never) effective in the B2B space.
  • “Where are we now and what are we learning?” Hemann maintained that the long-term vision of the team and of the company was not to build a system that would transcend the ages–they wanted to keep in mind the elasticity needed for new tools and new methodologies to be created. This vision was and is critical: without a vision, and without a mission, there is very little starting ground of which to speak. Hemann and his team had to determine what they wanted from their project (to enable Intel to become a best-in-class data-driven global marketing organization), as well as how they would achieve it (deliver relevant and timely insights to stakeholders using future-ready tools). Hemann noted that tools, however, are only ⅓ of the equation for success: without the right people and the right processes, the vision and the mission cannot be realized.
  • “Where are we going and what do we hope to learn?” Once mission and vision are established, benchmarks for success are necessary to determine how far a team has come and how far they have yet to go. Hemann’s team created not just one objective, but three separate objectives that would serve as checkpoints or progressive phases for a roadmap to future success: 1) expand scope, 2) communicate, educate, and deliver insights, and 3) establish a governance framework. Hemann emphasized that without a proper governance framework, no actions will be effective enough to achieve the desired objectives.


Overall, Hemann’s insights from the perspective of a Fortune 50 company were especially valuable to conference attendees who may be struggling to determine whether their own company’s digital analytics journey is headed in the right direction. His final takeaways were useful as general philosophies for anyone in the digital space: make collaboration a primary focus, and be patient. Rome was not built in a day, Facebook did not become a behemoth in a year, and building a department from the ground up (particularly when the department’s future remains uncertain) requires collaborating early, often, and over long periods of time to get it right.


Algorithmic Accountability

What the heck is algorithmic accountability anyway? It’s the concept of being transparent about the tools we set up for social systems and the ethics around them, and of using algorithms to understand our social world.

This heady panel talked philosophically about the ethics, values, laws and implications of social systems and covered three specific examples of how this impacts our daily lives.

Facebook Trending Topics

Facebook was recently accused of bias against conservative news stories in its trending topics feature. However, upon further investigation it was found that a team of five journalists was reviewing a set of data, interpreting it, and making individual judgements, drawing on 10 select news sources. The real story was less about Facebook and more about human intervention in the trending topics.

Panelist Alison Powell of the London School of Economics and Political Science insisted that in this example, it is important to understand and dig into questions such as: Who are the people making these decisions? How were they trained? Who are they accountable to? She emphasized the importance of acknowledging the complex nuances of these situations.

Predictive Policing

Panelist Josh Kroll of CloudFlare explained that every algorithm has a bias, and used the example of predictive policing to shine a light on the potential ethical dilemmas that come with this kind of data usage. One type of predictive policing uses analytical techniques to identify potential criminal activity: when there are not enough police to patrol every area, a department will rely on patterns of prior arrests to determine where to put officers.

A related example in the criminal justice system is a judge using data models to estimate how likely an offender is to reoffend when they are up for parole. These data models, however, include biases around minorities and ethnic groups. The ethical questions raised include: Is this an appropriate use of the data? How do we build mechanisms to address and reveal these issues?
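
One simple way such issues can be revealed is to audit a model’s error rates across groups. The sketch below compares false-positive rates (people flagged as high risk who did not reoffend) between two hypothetical groups; the records and field names are invented for illustration and do not describe any real scoring system.

```python
# Toy audit (illustrative only, not any real scoring model): compare a risk
# model's false-positive rate across two groups defined by a protected attribute.
def false_positive_rate(records):
    flagged_but_did_not_reoffend = sum(
        1 for r in records if r["predicted_high_risk"] and not r["reoffended"]
    )
    did_not_reoffend = sum(1 for r in records if not r["reoffended"])
    return flagged_but_did_not_reoffend / did_not_reoffend if did_not_reoffend else 0.0


# Hypothetical labeled outcomes, grouped by a protected attribute.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]

for group in sorted({r["group"] for r in records}):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))
```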


Credit Scoring

The third example covered the topic of credit scoring: when credit agencies use an algorithm, their “secret sauce,” to determine a credit score. Kroll said that the agencies don’t want consumers to know what that is, but they do want consumers to know that it’s the same across the board — everyone gets the same treatment.

This use of algorithms prompted an FTC review of whether FICO scoring practices are discriminatory. The review found that they were not, but the report took four years to be published, which Kroll called “an unacceptable amount of time.” The FTC is a trusted public entity and needs to be held accountable for sharing findings in a reasonable amount of time.

Key Takeaways

Moderator Farida Vis asked the panelists to home in on the key takeaways that attendees should remember.

From Alison:

  • Examine assumptions from the beginning. Make assumptions more clear.
  • Create a register of training data as a way to open up the black box.
  • Increased accountability means identifying the core values behind decision making.
  • Design apps and processes that are trustworthy and not creepy!

From Josh:

  • Be open about what is going into these data models: move to transparency and away from the black box.
  • Differential privacy: in the new iOS 10, Apple is explaining more about how it will collect your data and how your privacy will be protected (a toy sketch of the idea follows this list).
  • Ask these questions: What values should a system espouse? Does the data accurately reflect the true state of the world? How does the system reflect your values?
  • Some scholars don’t understand the technology and are afraid of it. This could lead to additional regulation, which could hamper tool development. There is a dark side to this, but we can manage it ethically.
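
For readers unfamiliar with differential privacy, the idea can be sketched with a toy Laplace mechanism: random noise is added to an aggregate statistic so that any one individual’s data has only a small effect on what gets reported. This is a minimal illustration of the concept, not Apple’s implementation.

```python
# Toy Laplace mechanism: report a noisy count so that any one person's presence
# or absence changes the output distribution only slightly. Illustration only.
import math
import random


def laplace_noise(scale):
    """Sample Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


def private_count(true_count, epsilon=0.5):
    """Differentially private count; the sensitivity of a count query is 1."""
    return true_count + laplace_noise(scale=1.0 / epsilon)


print(round(private_count(1000)))  # close to 1000, but randomized on each run
```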

As the panel wrapped, Vis acknowledged that this is a heavy topic but an important one that will undoubtedly get more attention in the years to come. She proposed that 2016 will be THE turning point where Algorithmic Accountability will become more prevalent and better understood.


Can Open Algorithms Lead to Better Data Ethics?

In this Pecha Kucha presentation, Sean Gorman, CEO of Timbr.io, gave a high-level overview of how open algorithms and data ethics are connected.


Sean raised several important questions:

  • As facial recognition software gets better, do we all just become barcodes?
  • When an algorithm makes a bad decision, who is accountable: the developer, the data scientist, the company that ‘owns’ the algorithm, or someone/something else?
  • What is classified as hate speech, and how can an algorithm identify hate speech?

Building on these questions, Sean posed this overarching question: Can open algorithms help us better understand data ethics?

In short, yes. As a first pass at answering this question, Sean explained that we must better understand how algorithms work. We know that every algorithm has bias. For example, algorithms that handle social media can lead to algorithmic racism, where an algorithm identified photos of people as animals, or algorithmic injury, where poor GPS data caused a four-car pile-up. If we know that algorithms have these types of bias, we can potentially address this bias through open algorithms in the following ways:

  • We can learn from companies like Google, Microsoft, and Facebook, which are experimenting with open algorithms
  • “Real time notebooks” and dashboards can empower data scientists with better information
  • We must better understand our own bias in order to better understand algorithms

From Machine Learning to Deep Learning

This interview with Elliot Turner, Director of Alchemy and Discovery at IBM Watson, focused on IBM’s cognitive computing technology. IBM Watson’s goal is to use the technology to impact the world, not only by providing businesses with new opportunities, but by improving the world’s healthcare system, helping governments manage risk, enhancing fraud detection, and many other areas. Doing this requires a deep investment, including billions of dollars in capital and hundreds of PhD researchers.

When asked whether Watson was a product or technology, Elliot explained that while celebrities interacting with Watson on Jeopardy might personify it as an entity with almost human characteristics, IBM Watson thinks of it as hundreds of different technologies that they have brought down from the “ivory tower” and given to customers. He explained that it’s also a series of products, a stack of capabilities built upon capabilities.


Elliot explained that there are three critical components required to impact the world with cognitive technology: algorithms, compute, and data. Compute means running massive simulations of how the brain processes and learns. IBM, like its competitors, thinks of data as the new oil in the world’s economy. If properly mined, data represents massive opportunities.

Many companies have been accumulating large data assets but actualizing only a small trickle (less than 15%) in the form of structured data. Unstructured data, such as emails, chats, comments, images, and videos, has been accumulated and stored, but has become a liability because organizations have not been able to actualize and take advantage of it.

Elliot gave an example of how they are helping companies utilize unstructured data with really interesting outcomes. Weather in a hyperlocal context has a massive impact on the way the world works, affecting everything from traffic to certain types of crime. When a drought in Africa is broken by heavy rain, it significantly increases the potential for a cholera outbreak. With proper medication, the death rate from cholera is less than 1%; without medication, it is over 50%. By combining social listening with weather data, systems can detect signals about what is going on in the world and identify opportunities for risk reduction, profit, and public good.

When asked about private and public data and whether organizations should share data, Elliot responded that although concerns about competitive advantage prevent organizations from coming together, there are safe ways to share data. IBM feels that taking cognitive data and hiding it in a data center prevents others from reaping its benefits. To this end, they use the Watson Developer Cloud to properly anonymize data and put it on the cloud, making it available to tens of thousands of developers so they can incorporate cognitive capabilities into their work. Elliot advised that if you can take part in the shared ecosystem, you should, and that we should all work together.

On the topic of how these systems limit human bias, Elliot explained that when they started to learn with systems trained by humans, they ran into biases, emotions, and human quirks. One technique they’ve leveraged to address this is unsupervised learning. Traditional learning involves a teacher or trainer; unsupervised learning, by contrast, comes from being exposed to the world and deriving a mental model of how it works. This approach enabled them to scale up reasoning systems and train Watson by exposing it to social media posts. At the same time, they wondered how human interactions would affect the model.

To research this, IBM Watson created a system that crawled the web, looking at news articles, posts, nearly everything that was written, to form a mental map of the world. After a day, they paused to see what the system had learned. In fact, it had learned hundreds of millions of things, from facts about celebrities to X-ray crystallography. The system also learned that dogs, based on what it had read, were a type of person. Because many people think of dogs as their children, there is certain context in the world that supports the truth of dogs being a type of human. Systems have to be able to have multiple perspectives simultaneously, to understand biases, but also work against them.
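
A toy sketch can make the “dogs are a type of person” result less mysterious: an unsupervised system that only counts which words share context will see “dog” surrounded by family language. The corpus and counting scheme below are invented for illustration and bear no relation to Watson’s actual pipeline.

```python
# Toy illustration of unsupervised learning from text (NOT IBM Watson's system):
# count which words share sentence-level context with "dog" in a tiny corpus.
from collections import Counter
from itertools import combinations

corpus = [
    "my dog is part of the family",
    "my son is part of the family",
    "the dog sleeps in the kids room",
    "the baby sleeps in the kids room",
    "I walk the dog every morning",
]

cooccur = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.lower().split()), 2):
        cooccur[frozenset((a, b))] += 1

# Words that most often share a sentence with "dog": with a corpus like this,
# family-related terms dominate, the kind of signal that can lead a system to
# conclude that a dog is "a type of person".
dog_neighbors = {
    next(iter(pair - {"dog"})): count
    for pair, count in cooccur.items()
    if "dog" in pair
}
for word, count in sorted(dog_neighbors.items(), key=lambda kv: -kv[1]):
    print(word, count)
```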

Elliot then addressed the major challenges he sees coming in cognitive computing within the next three to five years and how it will be used in new areas. He said that empathy, sarcasm, and the totality of the human experience are challenges IBM Watson is working on. He predicted that cognitive computing will be embedded in a vast array of the world’s economy, business, and healthcare, a “dark horse” that will drive a lot of progress and change. For example, it will be used to reduce medical errors and their impact on public health and mortality, and even to help prevent wars from being started inadvertently. He ended by stressing that while the technology will help people develop better products and services, it is really about making the world a better place.


Dark Social

This panel dug deeper into the visibility of social sharing. How do we bring dark social into the light? The term itself spawned from the idea of “dark matter,” the matter and energy that cannot be seen but which exerts a powerful force on the universe.

Dark social is a term coined by Alexis C. Madrigal, a senior editor at The Atlantic, to refer to the social sharing of content that occurs outside of what can be measured by Web analytics programs. – Wikipedia

Led by moderator Mark Josephson, the panel batted around the good, the bad, and the ugly of all things dark social. One overarching theme was that marketers need to demand that everything is measured so they can optimize what they are doing. Marketers may think they are measuring accurately, but the truth is they are not: the big chunk of “direct” traffic is not what they think it is.

So where does it come from? The user didn’t type in a URL and there is no bookmark, yet the visit shows up in analytics as direct traffic and is being misattributed. There is a referrer, but it isn’t known because the visit is coming from a platform or app that cannot be measured, such as Slack or Instagram.

To make matters worse, the problem is not going away; it will only mushroom because of the rapid rise of new platforms and messaging apps. Adding complexity is the fact that most of the activity is taking place on mobile, and most analytics programs were not originally designed for mobile measurement.


Panelist Brewster Stanislaw of Simply Measured estimated that analytics tools are potentially missing anywhere from 70-80% of peer-to-peer social sharing. (Other estimates from the panel ranged from 20-60%.) He explained that this traffic is miscounted over 50% of the time: marketers think it’s direct traffic, but it’s actually links being shared over text, in apps, or in other ways that cannot be measured.

When asked about the battle to find referrers, Matt Thompson from Bitly broke down the three phases of the market as follows:

  1. First, the term dark social was identified as a real thing. The term was coined and marketers became aware of it.
  2. Next came the Facebook era, when users realized that Facebook was not getting the credit it deserved as a referrer because link sharing was not trackable.
  3. Finally, the third phase of the market is where users are now: they are seeing that chat apps and IM traffic are also sources of dark social and are asking how they can parse user agents to attribute it more accurately.

The moderator pushed the panel to identify specific ways analytics companies can help solve these challenges. For starters, education and awareness need to be stronger about the fact that dark social is a problem. Once enough people raise the alarm and acknowledge that they are not measuring accurately, the industry will continue to find new ways to solve the problem.

As Josh explained, “Once companies find out that most of their traffic is not being counted accurately, they flip out. And they want it fixed immediately.”

Brewster emphasized the importance of the problem by explaining that private peer-to-peer sharing, the link someone texts to their friend, is the “truest expression of organic intent from consumer.”

Another way of understanding how crucial this issue is: consider it bottom-of-the-funnel user activity. Social is typically at the top of the funnel, but dark social is at the bottom, meaning it’s closer to the user taking real action.

A few tactical ideas for how to fix the issue are listed below; a minimal sketch of a couple of them follows the list:

  • Aggressively leveraging UTM parameters
  • Using Bitly tracking features
  • Looking deeper at user agents
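
The sketch below illustrates some of these tactics under stated assumptions: tagging shared links with UTM parameters so attribution survives copy-and-paste, flagging “direct” hits to deep pages as likely dark social, and a very rough user-agent check for in-app browsers. The heuristics and token list are assumptions, not any analytics vendor’s implementation.

```python
# Minimal sketch (assumptions only, not a vendor implementation) of dark social tactics.
from urllib.parse import urlencode, urlparse, urlunparse


def add_utm(url, source, medium, campaign):
    """Append UTM parameters to a URL before it is shared."""
    parts = urlparse(url)
    params = urlencode({"utm_source": source, "utm_medium": medium,
                        "utm_campaign": campaign})
    query = f"{parts.query}&{params}" if parts.query else params
    return urlunparse(parts._replace(query=query))


def looks_like_dark_social(referrer, landing_path):
    """Heuristic: no referrer, but a deep landing page nobody types by hand."""
    return not referrer and landing_path not in ("", "/")


def from_in_app_browser(user_agent):
    """Very rough check for in-app browsers that often strip referrers.
    Tokens are assumptions; verify against real user-agent strings."""
    tokens = ("FBAN", "FBAV", "Instagram")
    return any(token in user_agent for token in tokens)


print(add_utm("https://example.com/report", "newsletter", "email", "bb2016"))
print(looks_like_dark_social("", "/blog/dark-social-post"))  # True: likely dark social
print(looks_like_dark_social("", "/"))                       # False: plausible direct visit
```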

As the group wrapped up their discussion, the conversation moved towards the customer journey and how complex it is for marketers to measure now because of mobile devices and unattributed traffic. That said, everyone agreed that solving the challenge of dark social is critical because marketing teams must have access to accurate data in order to make decisions about their spend.


The Evolution of Visual Social Media

Two years ago, users were sharing 1.8 billion images every day on social media. With 4 billion images now being shared daily, visual listening, or “image intelligence,” is fast becoming one of the hottest technologies in social media. Of the images containing a brand, 85% do not mention the brand in text, which means that if you are only analyzing on the basis of text, you are missing a huge swath of insight. This panel discussed how their respective organizations are pulling intelligence from images that can be used in predictive ways for business and research, and how they see it evolving in the future.

Glen Szczypka, Principal Research Scientist for NORC at The University of Chicago, talked about using image recognition to study how media messaging and images affect public health concerns. With regard to tobacco use, they are looking at what images tobacco companies are using to attract customers; on the health side, they are working with organizations like the Truth Initiative to see what messages are effective in helping people stop smoking. He noted that it is particularly important to explore Twitter, Instagram, and other platforms where young adults are posting text and images related to smoking.


Ethan Goodman, Senior Vice President of Shopper Experience at The Mars Agency, helps Fortune 500 companies plan marketing activities with large retailers. For example, they are helping Campbell’s to sell more soup at Walmart, Kroger, and other stores. They use image recognition to get into the brains of customers and figure out how to market to them more effectively. Ethan is primarily focused on finding clients’ brands in photos, as well as spotting competitors’ logos. As they have evolved, they have started combining image analysis with sentiment and object analysis. He noted that their agency is a customer of, and investor in, Ditto Labs.

Glen’s organization retrieves images from Instagram on the basis of tags, such as #blunt when used as slang for inexpensive cigars. The tags both limit the data set to what is allowed by Instagram and help to find objects, like blunts, that can’t be recognized as easily as logos. They then use Ditto to recognize patterns in the image pixels and find logos (over 40% of the images they retrieve contain branded content). When analyzing which brands are featured in images tagged with #blunt, Swisher Sweets was by far the most common. This led Glen’s firm to look into whether the content was organic or whether Swisher Sweets was encouraging people to post pictures of their blunts.

Ethan’s firm, by contrast, retrieves images without any text attributes. Looking through the vast stream of photos containing logos, they were intrigued and sometimes shocked by what they found: everything from traditional use cases, such as images of people making dinner with Campbell’s soup, to images of a vodka brand where people were putting Skittles into the bottle to make a lava lamp. Ethan then uses the technology to glean insight that can drive a creative idea or marketing strategy. He explained that it’s important to know what other brands are in a targeted brand’s advocacy set. He noted that in the coming year, clients will increasingly use this strategy to inform advertising decisions and ad targeting. You will be able to target someone who has been pictured with a particular logo many times with a display ad for that brand.

As social media becomes increasingly visual, users are going to see more applications that feature buttons that allow them to learn or buy without having to leave an application. For example, a user will be able to click on a pair of glasses in an image, find retailers offering those glasses, and then even add them to their cart.

When asked how they train machines to find images, Glen explained that they use an iterative process in which a human looks at an image or text and labels it as, for example, “smoking” or “blunt.” After the machine finds images, its results are compared with what the human found. If they are a bit off, they train the machine a bit more.
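
The sketch below shows what such an iterative, human-in-the-loop process can look like in miniature: train a classifier, send its least confident predictions to a human for labels, fold the corrections back in, and retrain. The synthetic features and logistic-regression model are stand-ins for illustration, not Ditto Labs’ or NORC’s actual pipeline.

```python
# Simplified human-in-the-loop training loop (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 8))           # stand-in image features
y_labeled = (X_labeled[:, 0] > 0).astype(int)  # 1 = "contains the target object/logo"
X_pool = rng.normal(size=(200, 8))             # unlabeled images

model = LogisticRegression()
for round_num in range(3):
    model.fit(X_labeled, y_labeled)
    confidence = model.predict_proba(X_pool).max(axis=1)
    uncertain = np.argsort(confidence)[:10]    # hardest 10 go to a human reviewer
    human_labels = (X_pool[uncertain, 0] > 0).astype(int)  # stand-in for human answers
    X_labeled = np.vstack([X_labeled, X_pool[uncertain]])
    y_labeled = np.concatenate([y_labeled, human_labels])
    X_pool = np.delete(X_pool, uncertain, axis=0)
    print(f"round {round_num}: training set now {len(y_labeled)} examples")
```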

In terms of advice for companies wanting to start using image recognition, Ethan recommended that you start now. He noted that it is low risk to start. Think about how visual listening fits into your larger strategy, how it can complement your text listening tool. Glen noted that if you need to limit the data set, choose the tags you’re searching on carefully. For example, don’t use your brand as a keyword.

The panel also discussed ecosystem challenges related to the differences across platforms for data access. For example, users can generally pull images through APIs; however, there are limits. A user can only access publicly available images on Facebook. Instagram recently changed its terms so that a development license is required to use its API. Other social media platforms simply don’t allow users to access their images. In practice, one might not be able to pull 100% of what is available, but a solid, representative sample is usually achievable.

You should also set expectations about not only what is available, but how long it takes to retrieve the data for analysis. Ethan mentioned that it takes three to seven days to train a machine to precisely identify a logo, something to keep in mind when planning future image recognition efforts.


The Power of Live

Two days ago, the Speaker of the House shut down C-SPAN camera access on the floor of the House of Representatives, denying Democratic lawmakers the news coverage they wanted for the sit-in they staged in protest of firearm legislation that failed to get enough votes. As Sara Haider, Client Engineering Lead at Periscope, pointed out, “Our technology is meant to sort of give a voice to the voiceless. But ironically, our lawmakers–at this point–were the ones who were voiceless.”

Haider was talking about the live-stream takeover orchestrated by four lawmakers who took to Periscope during the sit-in. Periscope is a live-streaming tool that exists as a stand-alone app, integrates with Twitter, and is where live video news is known to break. When the lawmakers started broadcasting their sit-in, it drew the biggest audience for a live stream that Periscope had ever seen. Haider herself found out about the feed on Twitter, where the buzz just kept getting louder. Eventually, C-SPAN decided to air the Periscope broadcast from the House floor; the Periscope team had to get in touch with the network to educate them on how the tool should be used, covering everything from basic etiquette to changing the on-air stream from portrait to landscape mode.


Periscope, for those who are unfamiliar, is the hot live-streaming app that launched a thousand new competitors (or really, just a few large ones). Haider described the company’s vision as “a way to see the world through someone else’s eyes.” Users can download the app and start broadcasting live to anyone in the world, and their feed can be joined by anyone, anywhere. Viewers can engage with the broadcaster by commenting or tapping their screen, which shows the broadcaster visual “hearts” on-screen, encouraging them to continue or signaling agreement with something they’ve said or done. Think finger-snapping at a poetry reading. Shortly after Periscope launched and began to catch fire, social giants like Facebook and Snapchat began creating their own live-streaming tools, seeing the opportunity that clearly existed with this new type of audience.

Despite the large names that have suddenly become competition, Haider remains unfazed, indicating that, while Periscope currently operates on its own platform and integrates with only Twitter, they are willing and able to integrate across all social ecosystems. The brand seems to be working to build a loyal following that would be willing to use their specific live-streaming tool no matter which social media platform they’re using at the time.

That following of users is definitely building. Much like a Snapchat audience, many Periscope feeds are about the user themselves: who they are, what they’re doing at any given point in time. Many speakers and event marketers are using Periscope to create more attention and attendance for their conferences and seminars. A growing audience can also be found in Periscope communities; like-minded people are coming together for real-time meetup groups through the power of live-streaming, no matter where they happen to live. Where the internet gave individuals access to one another in conversation, Periscope gives them the ability to have coffee in a live meet-up group, physically watching and interacting with the broadcaster.

What does the future hold for Periscope? Live video apps may not seem as though they present a lot of opportunity for competitive differentiation, but there is plenty of unbroken ground to be discovered. Marketers should look for best practices guides as the tool grows in popularity; typical users should start to look for even more convenience of use and greater ecosystem integration and accessibility as time goes on. In any case, it is clear that the future of technology is in real-time visual; the data it can provide remains to be seen, but should most certainly be monitored closely!