Flip Back to the Future

By Tyler Singletary, BBI Board Member and VP & GM Klout and Consumer Data, Lithium (@harmophone, Klout, Lithium)

Flipboard was the first killer app for the iPad — a beautiful new way to read content from across the web, without worrying about RSS feeds. It was an early innovator in leveraging social networks for personalization. And it just might be the canary in the coal mine in a number of ways.

While Facebook gets mired in a debate around its use of human curators, and deeper still in its use of algorithmic curation, Flipboard had already gotten us familiar with both — and without the backlash. It’s been a while since anyone found a completely irrelevant article there, and it has learned what sources you read, the topics you’re interested in, and the amount of time you spend reading — nearly everything that optimizes a discovery and personalization system.

Flipboard’s native National Geographic experience, putting up the wall one brick at a time.

To accomplish this, we were led through the gates of their walled garden. Don’t be afraid; they’re pretty good gardeners. Early Flipboard users remember a time when a decent amount of content wouldn’t render well. Limitations with the iOS browser, content crawling and classification, and inherent hardware problems produced some downright ugly content and application crashes. To drive a more consistent user experience, Flipboard introduced a few features that sound familiar in today’s world of Open Graph tags, Instant Articles, Twitter Cards, and Pinterest optimization: custom Flipboard CSS tags and a content partnering platform with select publishers, to beautify content and provide a native experience to users. Flipboard became a platform. It also gained its first ad monetization opportunities.

Walled gardens aren’t usually made with bad intentions. Most people are just happy to see the foliage. It tends to rankle the open web purists, though, and certainly gives engineers a lot more to consider. What if web standards are behind? What if users don’t actually care about a standards-compliant, utopian free web, and just want to see their content? Then there are the paywalled gardens, like The Wall Street Journal and The New York Times. While germinated from a more protectionist point of view, they’re fighting for survival. Users don’t want to see ads, but they want content for free. Open Web enthusiasts seem happy with simple RSS readers, ignoring content creators’ right to control how their work is presented. Flipboard remains one of the only platforms purely dedicated to aggregated content discovery and consumption, with Apple News borrowing from its playbook since iOS 9. In this game, the platform with the most eyeballs and the best relationships with publishers will win.

And this is where Flipboard may be an early warning system. We’ve already seen a number of other “newsreader” tools fall by the wayside and shut down. If Flipboard loses this battle, it isn’t because of its approach to personalization — it worked, and social data got one of its first proofs of concept (and corrections: Flipboard was smart enough to ask its users about their interests, too). It isn’t because of its deals with publishers — that was survival, good business sense, and a model Facebook and Apple adopted. But its peer-to-peer social features and curated magazines, perhaps its only unique features against Apple (with Nuzzel on its heels), aren’t essential to the experience. Apple has won these arms races in the past through shrewd licensing. Ironically, Open Web concepts may be the saving grace that keeps Flipboard in the game.

Facebook is betting users will only care about the news as a passing interest amongst their friends, while Twitter focuses on breaking the news, not just sharing it. LinkedIn thinks it’s part of an authored thought leadership forum. Apple will integrate it more tightly with iMessage, but gave us the “just the facts, ma’am” version. If anyone dies in this mine, it’s not because of gas poisoning; it’s because there just isn’t enough clean air to breathe. Users will read pretty, relevant content wherever and however it appears. The more seamless, the better.

How Digital Curation Enhances the Value of Social Data

By Leigh Fatzinger, CEO, Turbine Labs (@lfatzinger, @turbinelabs)

Over the last 10 years, the social data market sector has enabled a multitude of ways to understand how audiences interact with brands, organizations, political candidates, governments, and more. Social data platforms have expanded in functionality and complexity through investment and industry consolidation, while simultaneously adjusting to new and evolving data sources. In the case of Facebook and Twitter, the availability or restricted use of existing data sources has required platforms to deviate from their original product roadmaps. Even with the changing data access landscape, social data platforms have access to a staggering amount of consumer and media content – data that needs to be collected, filtered, and processed into a usable format.

From an innovation perspective, and as a response to the amount of data available, much attention has been paid to enhancing and simplifying the user experience of these platforms with the goal of attracting and maintaining the widest possible audience of analysts, researchers, brand managers, subject matter experts, and others.

Attention has also been given to automating, as much as possible, the results these platforms deliver once configured for an entity or use case. Fulfilling the ‘ease of use’ benefit that many platforms tout as a differentiator, users have come to expect that producing and consuming useful insights should require no more than one or two clicks of a mouse.

At the same time, users of social data platforms continue to face headwinds when it comes to answering key value-oriented questions: What should we be measuring? What are the right KPIs? What is the expected outcome of the data we collect? Do reports generated by our chosen platform align to business goals? Are these insights actionable?

Access to massive amounts of data, the pressure users have placed on platform developers to simplify the user experience, the expectation of automation, and the need for near real-time actionable intelligence are driving the market to an inflection point – an inflection point that will change how these platforms are used and how their investment is justified.

Today, new questions are emerging that focus more on topical context and relevancy rather than vanity metrics such as audience growth and engagement rates. Yes, users of these platforms continue to measure, with good reason, how many shares and retweets their owned content generates. They continue to count earned media placements. They continue to plan and generate content with an expectation of virality.

But increasingly, brands, organizations, and governments are realizing that real insight comes from a granular, contextual understanding of how audiences respond to a campaign or topic. Users need to be able to digitally curate massive amounts of data quickly and efficiently in order to extract truly relevant and actionable insights.

Digital curation begins by configuring and tuning social data platforms to listen not only for a brand, organization, candidate, etc., but to categorize media and consumer conversations on a campaign-by-campaign or topic-by-topic basis. The output of these categorizations enables an analyst or researcher to make a baseline comparison against the total conversation as well as understand the overall sentiment of the topic.

The real value of digital curation comes from leveraging software to enable humans to quickly analyze and process a subset of the categorized data to determine the tone, narrative, and impact of the campaign or topic as a whole. The software offers access to the data, while humans extract unique, contextual elements of the data to make it useful and actionable. Through digital curation, the reporting of insights becomes more than just raw performance numbers on a campaign or topic. Results can be presented in a more persuasive way by presenting stakeholders with what consumers, media, and competitors are actually saying within the context of a topic – similar to a comment card.
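As a minimal sketch of that first categorization step (the topic keywords and sample posts below are invented purely for illustration; a real deployment would configure these per campaign inside the social data platform), tagging posts by topic and computing each topic's share of the total conversation might look like:

```python
import re
from collections import Counter

# Hypothetical, hand-tuned topic categories for a campaign.
TOPIC_KEYWORDS = {
    "product_launch": {"launch", "release", "new"},
    "customer_service": {"support", "help", "refund"},
}

def categorize(post):
    """Return every topic whose keywords appear in the post."""
    words = set(re.findall(r"[a-z']+", post.lower()))
    return [topic for topic, kws in TOPIC_KEYWORDS.items() if words & kws]

def topic_shares(posts):
    """Each topic's share of the total conversation, as a fraction of posts."""
    counts = Counter(t for p in posts for t in categorize(p))
    return {t: c / len(posts) for t, c in counts.items()}

posts = [
    "Excited for the new launch next week",
    "Need help with a refund",
    "Great release, but support was slow",
]
print(topic_shares(posts))
```

The shares give the analyst a baseline: a topic claiming an outsized fraction of the conversation is where human curation effort goes next.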

By integrating digital curation tools and processes into today’s highly advanced social data platforms, users can more quickly define what should be measured and what should be ignored. They can settle on a concise, realistic set of KPIs. They can align social data more directly to business goals. And, most importantly, they can justify the investment in social data by finding unique ‘needles in the haystack’ that often cannot be found via any other type of business intelligence or research platform.

Upcoming BBI Events!

BBI Community –
We’re excited to announce the dates for Big Boulder 2017, BBI Meetups in September and an amazing event for data-driven digital marketers at Coca-Cola HQ!

Big Boulder 2017
The 6th annual Big Boulder will be held June 1st & 2nd. The only way to ensure your attendance is to join the Big Boulder Initiative. You can register here; tickets will be made available in April. I’ll send out an email with instructions once the date is closer, and please check bigboulderconf.com for updates.

BBI Meetups
We’re happy to co-sponsor a series of Meetups this fall in San Francisco, New York, Washington DC and London. Details are below.

Data: It’s The Real Thing hosted by Coca-Cola and Sponsored by BBI

This conference will provide a holistic perspective on how marketers can utilize data. BBI will be sponsoring the social data presentation and we’re excited to be a part of this amazing event. Some of the tentative session titles/topics are:

  • What’s My Name? A day in the life of data: What data is worth purchasing? So many vendors; where does one start?
  • 99 Problems: Programmatic & Direct Advertising: How does the industry toe the line? Am I over or under invested?
  • The Symphony: Are you listening? Making Sense of Social Data
  • Paid in Full: Closing the loop for CPG companies: measuring marketing w/ dollars and sense
  • DMPs and the New World Order: How marketers from various industries are leveraging Data Management Platforms.
  • The Revolution Will Not Be Televised: Where to invest in linear, digital video or addressable?
  • Follow the Leader: Making the most of mobile (targeting, marketing and measurement)
  • A Good Day: Telling Better Stories with Data

The event will be held on November 17th, at The Roberto Goizueta Auditorium, Coca-Cola HQ. The address is 1 Coca-Cola Plaza, Atlanta, GA 30313. Please email me at mike@bbi.org for more information.

Thoughts from Big Boulder

Big Boulder 2016 is a wrap! We are grateful to the speakers, panelists and moderators for sharing their ideas and insights. And we are, of course, grateful to all of you who attended and made this not just a conference, but a community.

Our Big Boulder 2016 emcees, Mark Josephson, CEO of Bitly, and Farida Vis, Director of the Visual Social Media Lab at the University of Sheffield, shared several overarching themes that reverberated throughout this year’s event:

  • Big Boulder reveals what we don’t know, what we should know, and what we could know.
  • The evolving role of platforms, from machine learning to algorithms to bots, is fundamentally changing how we think about ourselves as human beings.
  • “It’s the power of us,” said Mark. The biggest problems in the world get solved by networks, not by one person sitting alone in a room. And everyone at Big Boulder is helping to move our industry forward.

“The conference is about what happens on stage and perhaps even more, what happens across the community of attendees off-stage,” said Chris Moody, VP of Data Strategy at Twitter and Chairman of the Board of Big Boulder Initiative. “This room has a high concentration of a group of people that can change an industry!”

You can view all of the blog coverage from Day 2 of Big Boulder 2016 below:

As we close the fifth year of Big Boulder, we plan to extend the insights and energy at this year’s event into opportunities to connect year-round.

We invite you to get involved in our Slack community and consider joining BBI as a member of our industry organization.

Thank you for an incredible Big Boulder 2016!


The Future of Bots

This session opened with a fascinating conversation between Sam Mandel, Operating Partner at betaworks, and Chris Messina, Developer Experience Lead at Uber, moderated by Tyler Singletary, VP & GM of Klout and Consumer Data at Lithium. So how are things really developing in this space? More has changed in the bot world in the last six months than in the past ten years. With progress ramping up so quickly, it only made sense that the last panel of Big Boulder 2016 would look into the future of bots and their impact on human interaction.


Since their inception, bots have carried a “science-fiction fascination” that keeps people interested; it’s now sexy to talk about bots. Messina commented that bots are exciting because computing is shifting to become accessible to a wider range of people. That makes sense, given that the number of consumers on the web grows every day, and not only on desktops: Messina noted that Mark Zuckerberg estimates the next generation of web users to come online will primarily use phones and other mobile devices.

This all raises several questions. Should bots live inside messenger apps at this moment? If you build it, will they come? Or will the creators and implementers of bots also have to acclimate users?

Users’ expectations guide these decisions as well. As Messina told Mandel, his experience with Poncho, an app that delivers customizable weather forecasts, is that the program does not learn as fast as he’d like, which is to be expected given how much “grey area” this new industry encounters, as Mandel put it.

Sam Mandel went on to explain that the majority of consumers do not fully understand the world of bots or how to use them properly. Stepping back, creating simpler processes, and providing more direct training is crucial to moving the field forward. To put it simply, talking to a bot is different from talking to a person; bots are just less capable. It’s worth noting, though, that as most people have been trained by years of executing Google searches, Google’s search engine has in turn improved at understanding specific and customized requests. The “one-size-fits-all” approach doesn’t cut it anymore now that the dynamic is shifting.

Looking ahead, the future of bots lies in the constant change the world has experienced over the past ten years. There will always be room for improvement, for better connections between humans and applications. As users move to a “much more diffuse world,” the technology will align more and more with how humans interact today. The potential is there, and one can only wait in anticipation to see where it leads.

Messaging in the World of Bots

When moderator Chris Moody, VP of Data Strategy at Twitter, polled the Big Boulder audience, about 15% of the group had used Kik, a mobile messaging app. Michael Roberts, Head of Chat at Kik, says that 15% is a good show of hands in an audience like Big Boulder. That’s because Kik is used mostly by teens in the United States. In fact, of the 300 million registered Kik users, 40% are U.S. teens. Michael said that high usage is because teens are the first generation of digital- and mobile-native users and naturally know how to interact with bots.


Anonymity vs. Pseudonymity

Kik empowers users to form an identity inside the app. Rather than anonymity, which is meant to strip identity from a user, pseudonymity allows users to form any identity they choose. The average Kik user spends 87 minutes inside the app every day, ample time to develop a persona and express themselves.

Bots are Hot

“Right now there are a lot of trends coming together at the same time, including NLP, AI, and machine learning,” Michael explained. Bots are hot not just because of those trends individually, but because they tie those trends together seamlessly inside messaging apps.

The popularity of bots makes sense also because it’s easy to reach a digital-native audience. Bots are an interface that digital natives already understand. There’s no learning curve, which allows companies to reach these users easily.

Bots are hot also because they let companies build inside of apps that are already on mobile phones, providing a huge opportunity to reach customers. Bots are another tool in a mobile developer’s toolkit. “The future isn’t putting bots in products,” said Michael. “The future is building better products.”

Beyond One-to-One Conversations

Bots aren’t just one-to-one conversations between a user and a bot. Far from it. For example, the “@mention” bot support in an app like Kik allows a user to pull a tic-tac-toe game into a conversation between friends. It’s not about talking with bots directly; it’s about adding bots into existing conversations.

Games are another opportunity for bots to engage users in a broad way. According to Michael, 40% of all app content in the app store is a game, and 80% of revenue on mobile is through games. Games provide a tremendous platform for bots to move beyond one-to-one conversations and to be a seamless, natural part of the gaming experience.

Retailers with brick and mortar stores can bridge the gap between physical space and digital community using bots. Companies such as Sephora and H&M are using bots to connect with users, even when those users aren’t at a store.

Privacy, Control and Bots

As surfaced in other panels during Big Boulder, bots raise the question of how much messaging to a user is too much. Michael described how Kik and its Bot Shop strike a careful balance: retaining users by notifying them about activity, without spamming them with too many bot messages.

Kik also thinks about users in terms of trust, intimacy, and control. The company tailors experiences for users so that users retain privacy and safety.

Measuring Bot Success

Bot messaging is unlike any other app platform, according to Michael. Common app metrics, like MAU (monthly active users) or the quantity of app downloads, aren’t relevant to bots. Instead, Kik uses chat sessions as a better metric to measure success. Chat sessions reveal how long a user is inside a chat, how active the conversation is, and what other bots users bring into the messaging platform.
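To make the chat-session idea concrete, here is a minimal sketch of deriving sessions from raw message timestamps. The 5-minute silence threshold is an assumption for illustration, not Kik's actual definition:

```python
# Hypothetical sketch: derive chat sessions from raw message timestamps.
SESSION_GAP = 300  # seconds of silence that ends a session (assumed)

def sessionize(timestamps):
    """Group message timestamps (seconds) into (start, end) sessions."""
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][1] <= SESSION_GAP:
            sessions[-1] = (sessions[-1][0], ts)  # extend the open session
        else:
            sessions.append((ts, ts))             # a long gap starts a new one

    return sessions

# Messages at t=0s, 60s, 120s, then one more an hour later.
msgs = [0, 60, 120, 3720]
sessions = sessionize(msgs)
print(len(sessions))                              # distinct chat sessions
print([end - start for start, end in sessions])   # per-session durations
```

Session count and duration, rather than downloads, then become the inputs to the success metrics Michael describes.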

The future of bots is likely to feature bots not just as a single, siloed tool but instead as a platform-agnostic way to engage users.


Building Digital Analytics Capabilities in a B2B World

The final Pecha Kucha talk of the conference was delivered by Chuck Hemann, Director of Digital Analytics at Intel, who gave the audience a rundown of what it’s really like to build a Digital Analytics platform and team for a Fortune 50 company. In 2014, Hemann was asked to join the Intel team to begin a new department and transition the company from the B2C to the B2B world. Hemann covered the life cycle of this project over the course of the last two years, addressing three specific focus areas for those who may be undertaking similar projects in the future:

  • “Where did we start and what did we learn?” Two years ago, not only was there no Digital Analytics team to speak of at Intel, but their digital measurement framework was also extremely elementary for what they were trying to accomplish: one based on clicks, but not on attitudes. Internal reports were rarely consumed–most people weren’t even aware they existed. When Hemann arrived and began his work, he immediately hired senior-level talent with a wide range of skills to hit the ground running. Even with a high-caliber team with plenty of experience, he noted that changing a measurement framework is at least a 6-9 month process, and that’s if it’s all done well, start to finish. Another note of importance: what is effective in the B2C space is not always–or sometimes not at all–effective in the B2B space.
  • “Where are we now and what are we learning?” Hemann maintained that the long-term vision of the team and of the company was not to build a system that would transcend the ages–they wanted to keep in mind the elasticity needed for new tools and new methodologies to be created. This vision was and is critical: without a vision, and without a mission, there is very little ground from which to start. Hemann and his team had to determine what they wanted from their project (to enable Intel to become a best-in-class data-driven global marketing organization), as well as how they would achieve it (deliver relevant and timely insights to stakeholders using future-ready tools). Hemann noted that tools, however, are only ⅓ of the equation for success: without the right people and the right processes, the vision and the mission cannot be realized.
  • “Where are we going and what do we hope to learn?” Once mission and vision are established, benchmarks for success are necessary to determine how far a team has come and how far they have yet to go. Hemann’s team created not just one objective, but three separate objectives that would serve as checkpoints or progressive phases for a roadmap to future success: 1) expand scope, 2) communicate, educate, and deliver insights, and 3) establish a governance framework. Hemann emphasized that without a proper governance framework, no actions will be effective enough to achieve the desired objectives.


Overall, Hemann’s insights from the perspective of a Fortune 50 company were especially valuable to conference attendees who may be struggling to determine if their own company’s digital analytics journey is headed in the right direction. His final takeaways were beneficial as general philosophies for anyone inside the digital space: make collaboration a primary focus, and be patient. Rome was not built in a day, Facebook did not become a behemoth in a year, and building a department from the ground up–particularly when the premise of the department remains uncertain for the future–requires collaborating early, often, and over long periods of time to get it right.

Algorithmic Accountability

What the heck is algorithmic accountability anyway? It’s the concept of being transparent about the algorithmic tools set up for social systems and the ethics around them, and of using algorithms to understand our social world.

This heady panel talked philosophically about the ethics, values, laws and implications of social systems and covered three specific examples of how this impacts our daily lives.

Facebook Trending Topics

Facebook was recently accused of bias against conservative news stories in its trending topics feature. However, upon further investigation it was found that a team of five journalists was reviewing a set of data, interpreting it, and making individual judgments, drawing on 10 select news sources. The real story was less about Facebook and more about human intervention in the trending topics.

Panelist Alison Powell of the London School of Economics and Political Science insisted that in this example, it is important to understand and dig into the questions of: who are the people making these decisions? How did they get trained? Who are they accountable to? She emphasized the importance of acknowledging the complex nuances of these situations.

Predictive Policing

Panelist Josh Kroll of CloudFlare explained that every algorithm has a bias and used the example of predictive policing to shine a light on the potential ethical dilemmas that come with this kind of data usage. One type of predictive policing uses analytical techniques to identify potential criminal activity: for example, when there are not enough police to patrol a certain area, a police department will rely on patterns of prior arrests to determine where to put police officers.

A related example in the criminal justice system is a judge using data models to learn how likely an offender is to reoffend when they are up for parole. These data models, however, include biases around minorities and ethnic groups. The ethical questions raised include: Is this an appropriate use of the data? How do we build mechanisms to address and reveal these issues?


Credit Scoring

The third example covered the topic of credit scoring: when credit agencies use an algorithm, their “secret sauce,” to determine a credit score. Kroll said that the agencies don’t want consumers to know what that is, but they do want consumers to know that it’s the same across the board — everyone gets the same treatment.

This use of algorithms prompted an FTC review of FICO scoring practices to determine whether they are discriminatory. It was found that they were not, but the report took four years to publish – which Kroll called “an unacceptable amount of time.” The FTC is a trusted public entity and needs to be held accountable to share findings in a reasonable amount of time.

Key Takeaways

Moderator Farida Vis asked the panelists to home in on the key takeaways that attendees should remember.

From Alison:

  • Examine assumptions from the beginning. Make assumptions more clear.
  • Create a register of training data as a way to open up the black box.
  • Increased accountability means identifying the core values about what is behind decision making.
  • Design apps and processes that are trustworthy and not creepy!

From Josh:

  • Be open about what is going into these data models: move to transparency and away from the black box.
  • Differential privacy: in the new iOS 10, Apple is explaining more about how they are going to collect your data and how your privacy will be protected.
  • Ask these questions: What values should a system espouse? Does the data accurately reflect the true state of the world? How does the system reflect your values?
  • Some scholars don’t understand the technology and are afraid of it. This could lead to additional regulation, which could hamper tool development. There is a dark side to this, but we can manage it ethically.
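The differential privacy point above can be made concrete with the classic randomized-response technique, a simplified illustration of the general idea and not Apple's actual mechanism: each user randomizes their answer before reporting it, so any single report is deniable, yet the aggregate rate can still be recovered.

```python
import random

def randomized_response(truth):
    """Report the truth half the time; otherwise report a fair coin flip."""
    if random.random() < 0.5:
        return truth
    return random.random() < 0.5

def estimate_rate(reports):
    """Invert the noise: E[reported] = 0.5 * true_rate + 0.25."""
    reported = sum(reports) / len(reports)
    return (reported - 0.25) / 0.5

random.seed(0)
true_answers = [i < 300 for i in range(1000)]   # true rate is 30%
reports = [randomized_response(t) for t in true_answers]
print(round(estimate_rate(reports), 2))          # should land near 0.30
```

No individual report reveals the user's true answer with certainty, which is the privacy guarantee; only the population-level statistic is recovered.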

As the panel wrapped, Vis acknowledged that this is a heavy topic but an important one that will undoubtedly get more attention in the years to come. She proposed that 2016 will be THE turning point where Algorithmic Accountability will become more prevalent and better understood.

Can Open Algorithms Lead to Better Data Ethics?

In this pecha kucha presentation, Sean Gorman, CEO of Timbr.io, gave a high-level overview of how open algorithms and data ethics are connected.


Sean raised several important questions:

  • As facial recognition software gets better, do we all just become barcodes?
  • When an algorithm makes a bad decision, who is accountable: the developer, the data scientist, the company that ‘owns’ the algorithm, or someone/something else?
  • What is classified as hate speech, and how can an algorithm identify hate speech?

Building on these questions, Sean posed this overarching question: Can open algorithms help us better understand data ethics?

In short, yes. As a first pass at answering this question, Sean explained that we must better understand how algorithms work. We know that every algorithm has bias. In social media contexts, for example, bias can produce algorithmic racism, as when an algorithm identified photos of people as animals, or algorithmic injury, as when poor GPS data caused a four-car pile-up. If we know that algorithms have these types of bias, we can potentially address it through open algorithms in the following ways:
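In the spirit of opening up the black box, one concrete first check on an algorithm's bias is demographic parity: comparing the rate of positive decisions across groups. The audit data below is made up for illustration; real audits are far more involved.

```python
from collections import defaultdict

def positive_rates(records):
    """Positive-decision rate per group; records are (group, decision) pairs."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit log of (group, model_decision) pairs.
records = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
rates = positive_rates(records)
print(rates)
print(max(rates.values()) - min(rates.values()))  # the parity gap
```

A large gap doesn't prove discrimination on its own, but it is exactly the kind of signal an open algorithm lets outsiders compute and question.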

  • We can learn from companies like Google, Microsoft, and Facebook, which are experimenting with open algorithms
  • “Real time notebooks” and dashboards can empower data scientists with better information
  • We must better understand our own bias in order to better understand algorithms

From Machine Learning to Deep Learning

This interview with Elliot Turner, Director of Alchemy and Discovery at IBM Watson, focused on IBM’s cognitive computing technology. IBM Watson’s goal is to use the technology to impact the world, not only by providing businesses with new opportunities, but by improving the world’s healthcare system, helping governments manage risk, enhancing fraud detection, and many other areas. Doing this requires a deep investment, including billions of dollars in capital and hundreds of PhD researchers.

When asked whether Watson was a product or technology, Elliot explained that while celebrities interacting with Watson on Jeopardy might personify it as an entity with almost human characteristics, IBM Watson thinks of it as hundreds of different technologies that they have brought down from the “ivory tower” and given to customers. He explained that it’s also a series of products, a stack of capabilities built upon capabilities.


Elliot explained that there are three critical components required to impact the world of cognitive technology: algorithms, compute, and data. Compute means running massive simulations of how the brain processes and learns. IBM, like its competitors, thinks of data as the new oil in the world’s economy. If properly mined, data represents massive opportunities.

Many companies have been accumulating large data assets but actualizing only a small trickle (less than 15%) in the form of structured data. Unstructured data, such as emails, chats, comments, images, and videos, has been accumulated and stored, but has become a liability because organizations have not been able to actualize and take advantage of it.

Elliot gave an example of how they are helping companies utilize unstructured data, with really interesting outcomes. Weather in a hyperlocal context has a massive impact on the way the world works, affecting traffic and even certain types of crime. When a drought in Africa is broken by heavy rain, it significantly increases the potential for a cholera outbreak. With proper medication, the death rate from cholera is less than 1%; without medication, it is over 50%. By combining social listening with weather data, systems can detect signals about what is going on in the world and identify opportunities for risk reduction, profit, and public good.
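A toy version of that weather-plus-social-listening signal detector might look like the rule below. Every threshold here is invented for illustration; a real system would be statistical and continuously learned, not a hand-written rule.

```python
# Hypothetical rule: flag cholera risk when a long drought is broken by
# heavy rain AND social chatter about symptoms spikes above its baseline.
def cholera_risk(drought_days, rainfall_mm, symptom_mentions, baseline_mentions):
    drought_broken = drought_days > 60 and rainfall_mm > 50
    chatter_spike = symptom_mentions > 3 * baseline_mentions
    return drought_broken and chatter_spike

print(cholera_risk(drought_days=90, rainfall_mm=80,
                   symptom_mentions=40, baseline_mentions=10))
```

The point of the example is the fusion: neither data source alone is a strong signal, but together they can trigger early intervention.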

When asked about private and public data and whether organizations should share data, Elliot responded that although competitive concerns prevent organizations from coming together, there are safe ways to share data. IBM feels that taking cognitive data and hiding it in a data center prevents others from reaping its benefits. To this end, they use the Watson Developer Cloud to properly anonymize data and put it on the cloud, making it available to tens of thousands of developers so they can incorporate cognitive computing into their work. Elliot advised that if you can take part in the shared ecosystem, you should, and that we should all work together.

On the topic of how these systems limit human bias, Elliot explained that when they started with systems trained by humans, they ran into biases, emotions, and idiosyncrasies. One technique they’ve leveraged to address this is unsupervised learning. Traditional learning involves a teacher or trainer; unsupervised learning instead comes from being exposed to the world and deriving a mental model of how it works. This approach enabled them to scale up reasoning systems and train Watson by exposing it to social media posts. At the same time, they wondered how human interactions would affect the model.

To research this, IBM Watson created a system that crawled the web, looking at news articles, posts, nearly everything that was written, to form a mental map of the world. After a day, they paused to see what the system had learned. In fact, it had learned hundreds of millions of things, from facts about celebrities to X-ray crystallography. The system also learned that dogs, based on what it had read, were a type of person. Because many people think of dogs as their children, there is certain context in the world that supports the truth of dogs being a type of human. Systems have to be able to have multiple perspectives simultaneously, to understand biases, but also work against them.

Elliot then addressed the major challenges he sees coming in cognitive computing within the next three to five years and how it will be used in new areas. He said that empathy, sarcasm, and the totality of the human experience are challenges IBM Watson is working on. He predicted that cognitive computing will be embedded in a vast array of the world’s economy, business, and healthcare, a “dark horse” that will drive a lot of progress and change. For example, it will be used to reduce medical errors and their impact on public health and mortality, and could even help prevent the inadvertent starting of wars. He ended by stressing that while the technology will help people develop better products and services, it is really about making the world a better place.