Actionable insights
If you only have a few minutes, here’s what investors, operators, and founders should know about Intercom’s AI evolution.
Working with LLMs. The hard part of working with an AI model like GPT-4 isn’t telling it what to do; it’s telling it what not to do. Intercom co-founder and Chief Strategy Officer Des Traynor highlighted this as one of the key difficulties in shipping a new AI chatbot, Fin.
Rabid demand. According to Traynor, Fin has seen “insane demand” since its March debut. Companies are enticed by a chatbot capable of resolving 50% of queries, freeing up support teams to focus on thornier issues.
Changing margins. Relying on providers like OpenAI costs money. That means companies building on top of AI models must keep cost in mind, especially when operating at Intercom’s scale. Traynor’s firm has had to reassess its shipping strategy to accommodate this cost structure shift.
Upside-down. The customer support segment is being turned “upside down,” according to Traynor. The advent of reliable, broadly intelligent AI bots like Fin means that teams are increasingly shifting toward a “first and last” model – answering a question for the first time and last time. This is a small example of how the broader support industry will likely be disrupted.
No bolt-ons. Companies being disrupted by AI can’t take half measures, per Traynor. Simply “bolting on” a few generative features – as many startups have chosen to do – isn’t enough to separate oneself from the chasing pack. Intercom is choosing to reinvent itself from top-to-bottom, becoming an AI-native business.
When you hear the word “pivot,” what comes to mind? Traditionally, that term is applied to small startups iterating and experimenting in a quest for product-market fit. Shopify, for example, began life as an online store for snowboarding gear. Or YouTube, which started as a dating service – users uploaded videos talking about their ideal partner in the hopes of meeting their match. The platform’s slogan was “Tune In, Hook Up.”
Though early-stage companies are no strangers to pivoting, the truth is that even the world’s largest businesses must undergo significant changes to remain on top. Not all bear the abrupt torque of a pivot, but they amount to the same thing: an enterprise embarks on a transformation and becomes a different business. Around 2012, Microsoft began its metamorphosis into a cloud behemoth – a radical corporate renewal. Six years later, fellow giant Apple started making a concerted push into services, diversifying revenue away from hardware. In the 2022 fiscal year, the App Store, Apple Music, Arcade, Fitness, and TV delivered $78 billion for Tim Cook’s firm.
Often, these shifts are precipitated by technological change. That might be the emergence of cloud computing, as in Microsoft’s case, or the hazy promise of the metaverse, which Facebook has swerved to meet. The stakes are high in such instances; these are “bet the company” moments: catch the wave at the right time and reap the rewards; miss it and die.
This year marks the definitive beginning of a new, innovation-driven shuffle step: the “AI pivot.” As large language models (LLMs) demonstrate their remarkable potential, large tech companies are racing to avoid obsolescence and capitalize on fresh momentum. Adobe, Canva, Discord, and Notion are among the established players using AI to add new functionality to their platforms. Meanwhile, Alphabet is in the midst of converting itself from a company that uses AI into an AI company. (And, of course, Microsoft is getting into the act once more, thanks to its links with OpenAI.)
We can understand the changes these businesses make in the abstract, but what does committing to a pivot like this really mean? How does a mature, highly valued enterprise shift its center of gravity and wrap itself around a new, artificially intelligent engine? What changes in product strategy, cost structure, and competitive dynamics does such a transformation invite?
To answer these questions, and many others, we sat down with Intercom co-founder and Chief Strategy Officer Des Traynor. While all the businesses mentioned thus far are interesting case studies, Intercom is a particularly apt focus for this investigation.
Firstly, it is an undoubtedly established business. Since its founding in 2011, the customer support platform has attracted $241 million in capital from leading investors like Kleiner Perkins and reached annual revenue of $200 million by some estimates. Reconfiguring an entity of this magnitude is not simple.
Secondly, and most importantly, Intercom is uniquely sensitive to the modern AI renaissance. As co-founder Traynor outlines in this interview, customer support is perhaps the most “target-rich” environment to apply generative models like GPT-4 – these technologies excel at talking, reasoning, and providing answers. Such conversational abilities have traditionally been the province of human agents. As their jobs change, the software they use must evolve, too.
Rather than deny disruption, Intercom seems hell-bent on meeting it. In March, Traynor’s firm debuted Fin, a customer service bot powered by GPT-4. Though in its early days, Fin is seeing “insane levels of demand,” according to the Intercom executive. It’s purportedly capable of resolving 50% of customer queries. Fin is just one part of Intercom’s “top to bottom” reinvention as an AI company, a shift that could power the firm to new heights but that introduces real risks and financial ramifications. Founders, operators, and investors interested in the AI revolution will find plenty of lessons in Traynor’s perspective.
Note: This conversation has been edited for readability and clarity.
Given Intercom’s product, I imagine you’ve been interested in AI for some time. What’s the company’s journey with AI been like?
We’ve been building with AI since 2018. We started with our “resolution bot,” which answers conversations like Fin does today. The biggest difference is that you have to train it: you tell it which screenshot to use, which videos to surface, what part of the product to deep-link to. It’s still a wildly popular product for some of our customers, who can reach 60-70% resolution rates with it. You’d call it old school today, but it was pretty bleeding-edge until last November.
ChatGPT has been a big shift. Unlike our resolution bot, you don’t have to train it – it understands language from the very beginning. It’s also extremely good at the basics of a conversation. You can have a long, multi-threaded conversation with it. If you asked our old bot, “Do you have an Android app?” it could have answered that question. But when a customer goes through a series of questions and answers and then asks a follow-up like “What about Android?” a different level of contextual understanding is needed.
The newer GPT models have made it a lot easier to inject AI in many more places throughout our product. Fin, our new GPT-4-powered bot, is an example of leveraging these new conversational capabilities.
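The contextual leap Traynor describes comes from chat-style APIs accepting the whole conversation history rather than a single question. A minimal sketch of the idea in Python – the function name and message format are illustrative, not Intercom’s code:

```python
def build_chat_messages(history, new_question):
    """Assemble the message list a chat-style LLM API expects:
    a system role, the prior turns, then the new user question."""
    messages = [{"role": "system",
                 "content": "You are a customer support assistant."}]
    for question, answer in history:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": new_question})
    return messages

# The follow-up only makes sense because the earlier turn is included.
history = [("Do you have an iOS app?", "Yes - it's on the App Store.")]
msgs = build_chat_messages(history, "What about Android?")
```

Because the earlier exchange travels with the request, the model can resolve “What about Android?” against the prior question – the contextual understanding the old bot lacked.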
How did Intercom think about implementing generative AI?
What worked well for us was to start by focusing on a relatively safe area of the business to test this technology out. I think other companies should follow that framework – look for a part of the business where expectations aren’t super high, such that even if it’s only marginally useful, it’s still useful.
For Intercom, that meant starting on the support agent side – the people answering queries – not the customer side. We used new AI models to summarize conversations that agents could expand or shrink as they liked. It was useful but low risk. If the AI mis-summarized something – it’s ok. These early experiments helped us understand what was possible and laid the groundwork for Fin.
What was the vision for Fin?
We wanted to build a superior, more capable version of our resolution bot that could hold all types of conversations but also had the capacity to get specific. We also needed to suppress hallucinations as much as possible – they’re quite dangerous for customer support. You can imagine a scenario where a customer asks an AI bot a question, the AI bot makes up an answer, and then the conversation disappears. In that case, the business may never know what happened, and the customer might go on to take all kinds of incorrect actions because of the bot’s bad advice. You really have to be careful.
It’s funny, but a lot of the work we did on Fin was getting it to not do things. The capabilities of a lot of AI models are great, but they have an inflated sense of their knowledge. We needed to tame that tendency so that Fin understood its confidence level when answering. Sometimes it should say, “I’m not sure, but here are two articles that might help”; at other times, it should say, “I know the answer – here you go.” Most crucial was getting Fin to learn when to say, “I have no clue, I’m going to pass you over to a human agent and they’ll take it from here.” Most of the research we did focused on helping it make that hand-off.
GPT-4 unlocked a lot. The newer model gave us far better performance when it came to Fin understanding when it doesn’t have a good answer to give.
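Traynor’s three response modes – confident answer, tentative article suggestions, and human hand-off – amount to routing on a confidence score. A rough sketch, with thresholds that are purely illustrative:

```python
# Illustrative thresholds - not Intercom's actual values.
HIGH_CONFIDENCE = 0.85
LOW_CONFIDENCE = 0.40

def route_response(confidence, answer, related_articles):
    """Pick one of the three behaviors described in the interview."""
    if confidence >= HIGH_CONFIDENCE:
        # "I know the answer - here you go."
        return {"action": "answer", "text": answer}
    if confidence >= LOW_CONFIDENCE:
        # "I'm not sure, but here are two articles that might help."
        return {"action": "suggest", "articles": related_articles[:2]}
    # "I have no clue" - hand off to a human agent.
    return {"action": "handoff"}
```

The hard research problem Traynor points to is producing a trustworthy confidence score in the first place; the routing itself is the easy part.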
How did you set those boundaries for Fin?
Our competitors would love to know a few of the steps we take. I respect The Generalist’s reach, so I can’t share too much.
At a high level, we do a lot of classic machine learning to understand the queries Fin receives so that it knows when it should and shouldn’t be looking for an answer. We’re essentially setting the operational framework the LLM is living in, priming it with some context on the customer query, and then providing the anatomy of a potential answer.
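At the level of detail Traynor is willing to share, the pipeline is: classify the query, decide whether the LLM should attempt it at all, then prime the model with context and the shape of an answer. A toy sketch of that gating idea – the keyword classifier stands in for a real trained model, and nothing here is Intercom’s actual implementation:

```python
# Topics the bot is allowed to attempt; a real system would use a
# trained classifier rather than keyword matching.
ANSWERABLE_TOPICS = {"billing", "login", "pricing"}

def classify_query(query):
    """Stand-in for the 'classic machine learning' classification step."""
    for topic in ANSWERABLE_TOPICS:
        if topic in query.lower():
            return topic
    return None

def build_prompt(query, context_docs):
    """Gate the query, then prime the LLM with context and an
    answer scaffold. Returns None when the LLM shouldn't try."""
    topic = classify_query(query)
    if topic is None:
        return None  # out of scope - escalate instead of guessing
    context = "\n".join(context_docs)
    return (
        "Answer only from the context below. If the context does not "
        "contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Customer question ({topic}): {query}\nAnswer:"
    )
```

Refusing to call the model at all for out-of-scope queries is one way of “setting the operational framework the LLM is living in.”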
How do you partner with OpenAI?
We’ve been aware of each other for a while. I knew OpenAI’s CTO Greg Brockman from his Stripe days – there’s always been a kinship between Stripe and Intercom. I think the folks at OpenAI were also aware of what we were doing with our resolution bot – it’s a natural application for AI in a target-rich environment.
The first feature we released based on OpenAI technology was in January, but we worked with predecessors of ChatGPT before that point. As OpenAI’s grown up over the last six months and started having commercial relationships, we’ve deepened our partnership. We spend a lot of money with them; we build a lot of proprietary technology that sits on top of their API. That’s why you’ll see us featured on their marketing website. I think we’re a pretty strong use case of what you can build on top of OpenAI.
How do you think about working with foundation model companies?
We have to be aware of the proliferation of models. It’s not just OpenAI; there’s Claude, Cohere, LLaMA from Facebook, and others. We need to see how the space evolves. Will the technologies get verticalized where there are LLMs that are great at talking to customers and others that are simulating doctors, for example? We don’t know right now. It’s something the industry will have to work out.
It’s not impossible that customers end up having a preference for the language model they use. For example, today, Intercom has plenty of users relying on Twilio to power their text messaging. Those customers will tell Intercom, “Here’s my Twilio API key. Can you stick my messaging costs onto my Twilio tab, please?” In the future, users might have similar commercial relationships with OpenAI or Anthropic, and we’ll need to do our work around it.
Because of these kinds of possibilities, we’ve built an abstraction layer into the product so that we can enable multiple LLMs. Intercom has been all in on OpenAI so far, but the ground could move under our feet. We’re still just a few months into this technological breakthrough; having long-term, well-planned commercial agreements just isn’t possible right now.
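An abstraction layer of the kind Traynor mentions typically means product code depends on a provider-agnostic interface, with each LLM vendor behind an adapter. A minimal sketch – class and method names are invented for illustration:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Provider-agnostic interface the product codes against."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider(LLMProvider):
    def complete(self, prompt):
        return f"[openai] {prompt}"      # a real adapter would call the API

class AnthropicProvider(LLMProvider):
    def complete(self, prompt):
        return f"[anthropic] {prompt}"   # ditto for Anthropic

def answer_query(provider: LLMProvider, query: str) -> str:
    # Swapping vendors becomes a configuration change, not a rewrite.
    return provider.complete(query)
```

This is the standard hedge against the ground moving: if a different model wins on cost or quality, only the adapter changes.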
What impact are you seeing for Fin?
We’re seeing insane levels of demand. It’s as high as it’s been for anything Intercom has ever had in the market – full-stop.
Why is that happening? The best way I can say it is: society is ready for a bot. ChatGPT and the advancements it represents have gotten people acquainted with the idea. Now, companies are realizing that this can be the frontline of their support, and it’s smart enough to escalate to humans when needed. The combination of automation plus humans is the pitch customers respond to the most.
The reduction in hallucinations is another big part of the momentum since it significantly reduces the potential for business damage. As a result, the vast majority of our support customers tell us it’s a no-brainer. They’re saying, “Yes, we need this,” especially since so many CX teams are under heavy stress.
Although it’s still early days for Fin, we’re seeing some significant differences between it and our previous resolution bot. Fin can address a much wider range of topics, for one thing. It also requires zero training – literally zero – versus the resolution bot, which required time and effort. Teams no longer have to go on a five-day offsite and plan their automation strategy and then set aside a week to get it up and running. Now they just flip the “on” button.
How does OpenAI’s pricing impact your product strategy?
Honestly, it is a new variable. It’s not something we’re used to in software – there’s basically never a time where you come up with a good idea and say, “We can’t afford to build it.” That’s just not a thing, right?
That changes when you’re building with AI. There are features we could build that we won’t because they’re too expensive. For example, we could use GPT-4 to summarize every conversation that every customer has with every business on Intercom. We could do that, but it would cost a lot of money because Intercom powers 500 million conversations a month. That’s a lot of API calls, right?
It requires a different kind of thinking for us. Just because a feature is a brilliant idea doesn’t mean it gets shipped. You have to think of what it might cost at Intercom scale.
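The scale arithmetic is easy to check. Only the 500 million conversations per month figure comes from the interview; the per-summary API cost below is an assumed placeholder:

```python
conversations_per_month = 500_000_000   # figure from the interview
assumed_cost_per_summary = 0.01         # USD per API call - a guess

monthly_cost = conversations_per_month * assumed_cost_per_summary
print(f"${monthly_cost:,.0f}/month")    # even at a cent each, $5 million/month
```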
How does it affect your cost structure?
It is a new concern for our margins, though within the context of the broader industry, relying on an AI model isn’t prohibitively expensive. Depending on where a support agent is and how complex the issue they’re addressing is, it costs between $5 and $25 per conversation, and each agent is tasked with closing between 50 and 100 conversations a day. The cost of calling an API is a rounding error compared to a company’s fully loaded expenses.
My take on it is: yes, there might be an implication for margins here; we don’t know what it is, and the ground is moving very fast. Second-order concerns will start bubbling up, where the different model providers begin undercutting each other and driving down the cost of usage. All of which is to say that I don’t think there’s a way to be particularly smart about cost right now. You just have to build things that are deeply valuable and be careful about shipping features that are cool but could incur massive costs at scale.
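The “rounding error” claim can be made concrete with the figures Traynor gives for human agents; the per-conversation API cost is an assumption:

```python
# Figures from the interview: $5-$25 per human-handled conversation.
human_cost_midpoint = (5 + 25) / 2      # $15 at the midpoint
assumed_api_cost = 0.05                 # USD per conversation - a guess

ratio = human_cost_midpoint / assumed_api_cost
# At these assumptions the API call is hundreds of times cheaper
# than the fully loaded cost of a human handling the conversation.
```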
How do you think AI will impact your industry?
It’s turned our industry somewhat upside down. I think these large language models will reimagine the customer support landscape from the ground up. I can’t imagine a more target-rich environment. AI is really good at having conversations that converge around a certain set of facts – that’s what customer support is.
I’ve seen a few AI minimalists in the support world who talk down the implications of AI, and I couldn’t disagree more. Every support tool will need to reconsider every aspect of its functionality for a world in which the most common questions are taken care of. In months – not years – customer support teams will shift towards a “first and last” model: they’ll answer questions for the first time and the last. Because once you answer them, the bot learns and takes care of it from then onwards. How does that not massively change the industry?
Who will be the customer support winners in AI?
I think the winner will be the company with the most complete solution. And by that, I mean a product that is centered around AI but understands there will still be customer support teams. No one is credibly saying that bots will solve 100% of customer queries – we know there are more complicated issues. For example, if someone wants a refund for a subscription, the product maker might want to ask them why before issuing a refund. There will always be concerns that aren’t addressed by bots and don’t have answers in documentation. The battleground for the future will be designing products that enable teams to interoperate with AI.
What won’t work is bolting AI onto yesterday’s technology. And honestly, that’s where everyone goes at first. Every new generative thing you see is basically: “Oh, if you just type ‘dot-dot-dot’ it creates a random string of text – hallelujah!” Those are just bolt-ons.
Intercom is AI top to bottom. There’s AI in the messenger when you write your query, it’s AI trying to answer, and it’s AI handing it over to a human if it can’t. When the human receives the conversation, it’s AI that has summarized it, and when that person goes to respond, it’s AI that suggests an article to inject. And then it’s AI that monitors operations and tells you what areas you need to work on. You need a clear thread throughout the product – just bolting on will not work.
What are the business risks of going all-in on AI?
Intercom is already a lot of the way there. The challenge is going to be articulating AI’s value and shortcomings to our customers. They need to learn how to feed it better, how to give it better context. It’s about helping customers get the best value out of these technologies and also understand what’s going on in their business.
Right now, you turn this thing on and say, “Well, I hope everything’s good!” You can watch thousands of conversations fly by, but that’s not an easy way of staying on top of things. Traditionally, a support leader might walk the floor and hear what’s happening from their team. Maybe there’s a big bug that’s leading to a spike or shipping’s delayed. We have to do a lot of work on our end to surface that kind of information – to say, “here’s what’s going on” and share that on behalf of the bot. That’s a whole new muscle we’ve never had to build.
The Generalist’s work is provided for informational purposes only and should not be construed as legal, business, investment, or tax advice. You should always do your own research and consult advisors on these subjects. Our work may feature entities in which Generalist Capital, LLC or the author has invested.