What to Watch in AI (November 2022)
Reid Hoffman, Saam Motamedi, Sarah Guo, Lan Xuezhao, Matt Turck, Leigh Marie Braswell, Nathan Benaich, Rob Toews, Cat Wu, and Michael Dempsey highlight the AI trends to keep an eye on.
Brought to you by Sardine
Scams only work if people believe they’re real.
Scams hide in plain sight. The email outlining an investment opportunity or the phone call promising paid work may seem like good fortune. But more likely than not, they’re attempts to defraud you.
The sophistication of these scams means that anyone is at risk. Last year, fraud losses increased 70%, suggesting that fraud is outstripping our technological defenses. It’s a problem that is constantly evolving. While people like you and me are at risk, so are companies. Businesses lose billions every year to scams, sometimes without realizing it, since they’re often defrauded by “verified” customers.
Sardine protects businesses from such threats. Leveraging sophisticated machine learning, Sardine stops the scammers from winning. You can think of them as the world’s best fraud team that you hire as an API. Compliance, onboarding, transaction monitoring, and fraud detection are all handled through Sardine’s simple API and SDK.
The impact is major. Sardine helps businesses like FTX, Brex, Metamask, and Blockchain improve their conversion rates, raise order values, and lower fraud losses.
Scale without the scams. Implement Sardine today.
You can listen to an audio version of The Generalist on Spotify or Apple Podcasts.
Actionable insights
If you only have a few minutes to spare, here's what investors, operators, and founders should know about the most exciting AI trends.
Copilot for everything. AI is already streamlining illustration, writing, and coding. It may soon become an assistant for all knowledge workers. In the future, we may have versions of GitHub’s “Copilot” feature for lawyers, financial analysts, architects, and beyond.
Tracking value accrual. As AI startups often rely on publicly available models like GPT-3 or Codex, some question their defensibility. The fundamental question centers around value accrual. Will applications that leverage GPT-3 successfully capture value? Or will it accrue to the infrastructural layer?
Beyond words and images. GPT-3 and DALLE-2 have attracted deserved attention for their ability to automate text and image creation. The most impactful uses of AI may come from the life sciences, though. AI can be used to design better pharmaceuticals or run more efficient clinical trials.
Improving interfaces. Interactions with AI typically take the form of a basic text box in which a user enters a “prompt.” While simple to use, greater control may be needed to unlock the technology’s power. The challenge will be to enable this potential without introducing needless complexity. Applications will need smooth, creative interfaces to thrive.
Addressing the labor shortage. Skilled laborers are in short supply even as society’s needs grow. For example, while demand for skilled welders increases by 4% per year, supply declines by 7%. AI-powered robots may be part of the solution, automating welding, construction, and other manual tasks.
***
“This time is different.”
Sir John Templeton, the man named “stockpicker of the century” by Money magazine in 1999, referred to those as the “four most dangerous words in investing.”
It’s a good quip and a fair point. Markets are full of mirages, and circumstances that appear exceptional may show themselves to be mundane – one movement in a familiar, repetitive cycle.
Sometimes, though, things really are different. Sometimes, a tiny, promising glimmer produces a lasting flame. Sometimes, the world is genuinely changed.
The sentiment in venture capital is that we may be in the midst of such a moment when it comes to artificial intelligence (AI). The past year has seen a blossoming of new models and startups, along with increased public interest. While venture investing in the sector has slowed in line with the broader market pull-back, talk to VCs today about what they’re most excited by, and generative AI is often mentioned.
As ever, there’s a chance we look back on this period as a false dawn – the result of capital searching for heat amidst a cooldown. But that feels unlikely. My first venture job was in 2016 when every other pitch deck purported to have some AI advantage, and chatbots were seen as a UX evolution. Playing with DALLE-2, GPT-3, and Stable Diffusion feels decidedly different than that era, the equivalent of jumping from a pull-to-speak doll to a precocious toddler. AI is unlocking real creativity and real commercial value, producing novel images, plausible writing, and usable code. The sheer volume of innovation and experimentation often feels difficult to follow as improved models supersede predecessors, and startups identify new ways of leveraging them. The horizon of possibility looks distant one day, then jarringly close a few weeks later.
To better understand the state of the industry, I’ve asked ten thoughtful AI investors to share the trend they believe is worth watching. My hope is that it helps us (myself included) better identify areas of opportunity and topics worthy of further research.
A note on how these collaborations come together.
While investors know what other contributors are writing about and are encouraged to pick different subjects, I’ve found that some overlap is often interesting. Two investors might analyze a similar topic very differently, and there is value in their distinctions.
Additionally, I intentionally do not preclude investors from mentioning companies in which they have invested. Everything is a matter of trade-offs, and I believe the benefits outweigh the perceived costs. The downside of this approach is that investors may be seen as “talking their book.” A few things mitigate that risk. Firstly, we select contributors that I consider thoughtful and reliable. Secondly, it’s more interesting to allow investors to pick the companies they know best and have studied most deeply. It also requires them to choose among favorites. Lastly, it demonstrates they have skin in the game, backing their convictions with capital.
With that outlined, let’s tumble down the AI rabbit hole together and learn how new technologies are impacting our minds, bodies, and machines.
Trend: The elevation of human work
Is there any profession as quintessentially right-brained as an “artist?” Or one as left-brained as a “programmer?”
What’s been so remarkable to us about the rapid evolution of the last year, especially in large language models, is how they’re now powering assistive tools that radically increase productivity, impact, and value across a wide range of professions.
For artists, we’ve got AI image-generation tools like OpenAI’s DALL-E, Midjourney, and many others. For programmers, we’ve got Microsoft’s GitHub Copilot, which helps software developers write, test, and refine code in many of today’s most popular programming languages.
While some AI skeptics characterize large language models as brute-force prediction machines that won’t ever imbue computers with anything like human intelligence or consciousness, what we see, in mind-blowing practice, is how profoundly these kinds of AI tools are already beginning to enhance human flourishing.
What Copilot does for developers and DALL-E does for visual creatives of all kinds is reduce or eliminate rote, time-consuming, but still crucial aspects of their jobs. Of course, this dynamic is hardly unique to software developers and artists. Large language models are trained on massive quantities of text data, then incorporate what they “learn” to generate statistically probable (contextually sensible) output to user-supplied prompts. So while GitHub Copilot was trained by ingesting massive quantities of computer code, different versions of Copilot are equally possible for virtually any profession.
A Copilot for attorneys, for example, could help them draft contracts, motions, briefs, and other legal documents based on natural language queries, previous cases, and best practices. It could also suggest relevant precedents, statutes, and citations, or flag potential errors, inconsistencies, or risks in existing documents.
A Copilot for architects could help them design, model, and optimize their buildings and structures based on their specifications, constraints, and objectives. It could also generate interactive visualizations and help scope out the environmental, social, and economic impacts of projects.
Imagine a world where millions of professionals across thousands of industries use domain-specific versions of Copilot to soar faster and higher to new levels of productivity, accuracy, and creativity. A world where professionals across all industries can use general-purpose tools like our portfolio company Adept's Action Transformer to harness the power of every app, API, or software program ever written via interfaces that allow them to describe the tasks they want to accomplish in plain language.
In dystopian visions of the future, technology in general and AI in particular are often characterized as forces that will lead to an even more polarized world of haves and have-nots, with the bulk of humanity being disenfranchised, marginalized, and immiserated by machines.
In the world we actually see evolving today, new AI tools effectively democratize facility and efficiency in unprecedented ways. In doing so, they’re empowering individual professionals to achieve new productivity levels and society to achieve gains that may exceed those unleashed by the Industrial Revolution. Not only that, but people will also find their jobs more engaging and fulfilling because they’ll have more time to focus on the most creative, strategic, and novel aspects of them.
This future is here. There will be an AI-amplifying tool for every major profession within five years. These tools can catalyze human excellence across occupations – right brain, left brain, and any brain.
– Reid Hoffman, cofounder at Greylock, and Saam Motamedi, partner at Greylock
Trend: Generative AI and life sciences
It’s been another hot summer in AI. We’ve seen the rise of new research collectives that open-sourced breakthrough AI models developed by large centralized labs at a never-before-seen pace. While these text-to-image/video models offer viral consumer-grade products that capture our imagination, the most impactful applications of these models are unlikely to be these first-order effects. I believe the place to build is at the intersection of AI and science, specifically in the life sciences.
Today’s scientific method is firmly rooted in data-driven experimentation. The resolution and scale of the data we can generate to explain biological systems are continually improving, while we develop AI model architectures capable of modeling human language, natural images, or social network graphs. These architectures can be directly transferred to modeling proteins’ language, cells’ images, or chemical molecule graphs. This uncanny generalization ability is now unlocking breakthroughs in protein structure prediction and drug molecule design. AI is driving a new generation of technology-driven biotech companies (“TechBio”) attacking the trillion-dollar pharmaceutical industry to deliver improved medicines faster and at a lower cost.
With Air Street Capital, I have invested heavily in companies driving this industry forward. One of the companies I’ve backed is Valence Discovery, which develops generative design methods to create new classes of potent drug molecules previously out of reach due to the requisite design complexity. Valence is pursuing ultra-large generative chemistry initiatives with leading research institutions to push the boundaries of today’s generative AI methods for drug design.
One founder in this space is Ali Madani, who led an AI for protein engineering moonshot called ProGen at Salesforce Research. There he developed large language models specifically applied to designing brand-new artificial proteins that recapitulated or even outperformed the function of their naturally occurring peers. The group produced the first 3D crystal structure of an AI-generated protein. Proteins are the functional actuators of all life, and the possibilities a technology like this might unlock are vast.
– Nathan Benaich, General Partner at Air Street Capital
Trend: Collaborative interfaces
Large language models (LLMs) are among the most powerful tools we've ever seen. We are still collectively testing the bounds of instruction for these models. Clever prompt engineering has quickly become the sport-du-jour for nerds. (The “let's think step by step” prompt almost comically enhances model reasoning abilities.)
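As a minimal illustration of how little machinery this involves, here is a sketch using the OpenAI Python completions API; the model name, decoding settings, and example question are illustrative rather than a recommendation.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

def complete(prompt: str) -> str:
    # Plain text-completion call; model and decoding settings are illustrative.
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=256,
        temperature=0,
    )
    return response["choices"][0]["text"].strip()

question = (
    "A juggler has 16 balls. Half are golf balls, and half of the golf balls "
    "are blue. How many blue golf balls are there?"
)

# Direct prompt: the model often jumps straight to an answer, sometimes a wrong one.
direct = complete(f"Q: {question}\nA:")

# Zero-shot chain-of-thought: the same prompt plus "Let's think step by step."
# nudges the model to write out intermediate reasoning before the final answer.
step_by_step = complete(f"Q: {question}\nA: Let's think step by step.")
```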
But natural language is not a panacea – we are still issuing commands blindly, without a manual. There are no guiding, coherent abstractions in prompting, no obvious maps to navigate a model's “latent space,” just lots of trial-and-error and clever tricks.
Startups that have begun to figure out UX simplifications for narrow use cases have reaped returns. One example is Jasper’s templated prompts for producing marketing copy. We are in the early days here, as illustrated by the fact that most language model products expose the opaque concepts of “sampling steps” and “seeds” to users.
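As a sketch of what that simplification can look like, here is a hypothetical copywriting template (not Jasper’s actual implementation) that reuses the generic complete() helper from the sketch above; the point is that the prompt, tone, and decoding settings live behind a simple form.

```python
# Hypothetical product-copy template: the user fills in three form fields; the
# prompt wording, tone instructions, and decoding settings stay hidden in the UI.
TEMPLATE = """Write three short pieces of marketing copy.
Product name: {product}
Audience: {audience}
Key benefit: {benefit}
Tone: confident but friendly. Keep each option under 30 words."""

def product_copy(product: str, audience: str, benefit: str) -> str:
    prompt = TEMPLATE.format(product=product, audience=audience, benefit=benefit)
    # complete() is the generic LLM call from the previous sketch; a production
    # tool would also pin the seed and sampler settings so users never see them.
    return complete(prompt)

print(product_copy("Acme Notes", "busy sales teams", "automatic meeting summaries"))
```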
Everyone with internet access will very soon be indirectly using large language models in daily tasks. At a minimum, search will be disrupted beyond recognition, delivering answers and summaries on demand. We should also see LLM-based tools designed for more mastery and deeper interaction. Creatives already want generated images to be manipulable in structure, and workers want trustworthy output without hallucinations. Many might like their AI assistants to be educated with specific knowledge. These are the sophisticated “bicycles for the mind” that will unlock productivity for knowledge workers.
Ilya Sutskever dismissed “prompting” as a transitory term that’s relevant only thanks to flaws in our models. I expect he is right (given he usually is) and that our models will be increasingly able to understand intent. But a fundamental problem is that human intent is not always deterministic; it is often iterative, exploratory. As models engage in more complex tasks that require this sort of thinking, my hunch is that understanding workflow and enabling more control and feedback tailored to that workflow will be vital to creating end-user value. Early ideas in improving UX include templates, UIs for choosing amongst generations, the ability to add more constraints, controls over context length, intermediate controls in chained processes, and exposing the “thought process” of models.
Some entrepreneurs and investors have despaired over whether there is business value to be built around someone else’s models, but we are only beginning to understand how to interact with AIs. There is likely to be variability across domains, and researchers are unlikely to address the needs of every user persona. Will the only interface to these powerful models forever be a simple, static text box? I think not – and therein lies a product opportunity.
– Sarah Guo, founder of Conviction
Trend: AI video creation
Generative AI is all the rage right now, and with good reason, as it’s certainly very exciting. Technological prowess aside, the usual business questions apply: can you build a product that solves a problem 10x better with generative AI than you could otherwise? Can you build a defensible competitive advantage over time?
In my (biased) opinion, the video creation platform Synthesia is an excellent example of how to build an exciting business on top of generative AI. With Synthesia, a user types a few lines of text, clicks a couple of buttons, and voilà! Within minutes, a professional video pops up, with a human avatar narrating the text (in up to 60 languages, mind you).
Synthesia is used for various enterprise use cases, with particular traction around onboarding and training. For many customers, the alternative has historically been to send long PDFs that very few read or spend hundreds of thousands of dollars on creating professional videos using actors, directors, cameras, and post-production.
Using generative AI, Synthesia dramatically reduces the effort, time, and money required to create a business video, perhaps by 100x, and empowers anyone to do it. In addition, it’s built its own proprietary AI technology – two of its co-founders are AI professors, and it has a strong in-house research group. While it already leverages large language models, the company avoids the platform dependency that startups building applications directly on GPT-3 will face sooner or later, paving the way for a long-term defensible competitive advantage and category leadership.
– Matt Turck, Managing Director at FirstMark
Trend: Automated code generation and app development
The pace of progress in modern machine learning (ML) has always seemed fast; a deep-learning model won the most popular computer vision competition for the first time only a decade ago. However, when GitHub released their “AI pair programmer” Copilot product in late 2021, many people (even some working in ML, like myself!) were shocked that today’s deep learning models could already autocomplete code for highly skilled software developers. Inside the interface developers use to code, Copilot suggests how a line of code could be finished and even generates multiple lines of code from a plaintext description of what that code should do. Some engineers using the first version of Copilot claim it saves them hours every day or even writes 40% of their code.
Copilot is built on OpenAI’s Codex, a large language model (LLM) that translates natural language into many popular programming languages and was trained on tens of millions of public GitHub code repositories. For context, OpenAI is a San Francisco-based artificial intelligence research company; it was founded as a non-profit in 2015, adopted a for-profit structure in 2019, and then raised $1 billion from Microsoft (which acquired GitHub in 2018) to fund its research. In return, Microsoft gained exclusive access to some of OpenAI’s LLMs, including Codex.
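The underlying pattern is simpler than the product suggests. Here is a hedged sketch against the OpenAI completions API (code-davinci-002 was the publicly documented Codex model at the time; the prompt and parameters are illustrative): given a plain-language description and a function signature, the model continues with the body. Copilot layers editor integration, ranking, and filtering on top, but the completion step is roughly a request of this shape.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

# A plain-language description plus a signature; the model continues with the body.
prompt = (
    "# Python 3\n"
    "# Return only the even numbers from a list, sorted in descending order.\n"
    "def even_desc(numbers):\n"
)

response = openai.Completion.create(
    model="code-davinci-002",  # Codex model name at the time of writing
    prompt=prompt,
    max_tokens=128,
    temperature=0,
    stop=["\n\n"],             # stop once the function body is finished
)

print(prompt + response["choices"][0]["text"])
```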
Ultimately, Copilot provides convincing proof that current ML capabilities can automate an increasing amount of code generation and application development. Newly-created startups and well-established companies have started addressing multiple parts of the product-building experience, including automated code reviews, code quality improvements, shell command autocomplete, documentation, and even frontend and website generation.
An example of an early-stage startup* building in this space is Grit. Grit tackles the most dreaded engineering work at any company: paying down “tech debt.” This debt accumulates when developers take coding shortcuts to launch features more quickly, sacrificing long-term reliability and performance. Grit’s product acts as an automated developer that fixes many common issues and improves with human feedback on the suggested code changes. By combining static analysis with LLMs, Grit’s vision is to create self-maintaining software.
Given the potential of this technology to revolutionize software development, multiple investors have compiled relevant company lists, and numerous other startups are building in stealth. Some of these startups build on the Codex API and aim to develop differentiation through unique product experiences and proprietary data flywheels. Others are building their own models from scratch or fine-tuning open-source models. As these companies mature, it will become more obvious where the majority of the value will accrue, either to the AI infrastructure providers or the AI applications themselves.
* Disclaimer: Founders Fund and the author are investors in Grit.
– Leigh Marie Braswell, Principal at Founders Fund
Trend: Digital twins in clinical trials
Artificial intelligence will transform how we use pharmaceuticals to treat human illness.
When people think of AI and pharma, the application that most often jumps to mind is AI for drug discovery. (For good reason: AI-driven drug discovery holds tremendous potential.)
But there is another compelling machine learning use case that, while less widely covered (and less zealously funded), promises to bring life-changing therapeutics to market faster and more effectively for millions of patients. This is the use of digital twins in clinical trials.
It is well-documented how inefficient and expensive clinical trials are today, with the average new drug requiring over a decade and $2 billion to bring to market. Recruiting trial participants is one major stumbling block in shepherding a drug through clinical trials. A single trial requires recruiting hundreds or thousands of volunteers to populate its experimental and control arms. This has become a significant bottleneck. Eighty percent of clinical trials experience enrollment-related delays, with trial sponsors losing up to $8 million in potential revenue per day that a trial is delayed. Hundreds of clinical trials are terminated each year due to insufficient patient enrollment; indeed, this is the number one reason that clinical trials get terminated.
“Digital twins” offer a transformative solution to this challenge. The basic concept is simple: generative machine learning models can simulate placebo outcomes for patients in clinical trials. This can be done at the individual patient level: a digital twin can be created for each human trial participant in the experimental arm of a trial, simulating how that individual would have performed had they instead been in the control arm.
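To make the mechanics concrete, here is a deliberately toy sketch (a simple conditional model in numpy, not Unlearn’s actual method): fit a model of placebo-arm outcomes given a baseline measurement from historical trial data, then sample simulated control outcomes for a patient enrolled in the treatment arm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy historical placebo-arm data: baseline severity score x, outcome y at week 12.
x_placebo = rng.normal(50, 10, size=500)
y_placebo = 0.9 * x_placebo - 2.0 + rng.normal(0, 5, size=500)

# "Train" the generative model: here, just a linear fit plus residual noise.
slope, intercept = np.polyfit(x_placebo, y_placebo, deg=1)
residual_sd = np.std(y_placebo - (slope * x_placebo + intercept))

def placebo_twin(baseline: float, n_samples: int = 1000) -> np.ndarray:
    """Sample plausible placebo outcomes for one treated patient's digital twin."""
    return slope * baseline + intercept + rng.normal(0, residual_sd, size=n_samples)

# A treated patient with baseline 55: compare their observed outcome to the
# distribution of outcomes their twin would likely have had on placebo.
twin = placebo_twin(55.0)
print(round(twin.mean(), 1), round(twin.std(), 1))
```

Real systems typically model full longitudinal trajectories across many variables and must satisfy regulators on bias and calibration, but the core idea, learning the distribution of control-arm outcomes and sampling from it per patient, is the same.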
Crucially, this means that pharmaceutical companies need to recruit significantly fewer human participants because much of the control arm patient population can be replaced by digital twins. This makes clinical trials significantly faster and cheaper, enabling life-changing therapeutics to more quickly come to market and reach millions of patients in need.
San Francisco-based Unlearn is one AI startup at the forefront of this transformative technology. Unlearn is currently working with some of the world’s largest pharma companies, including Merck KGaA, which is deploying the startup’s digital twin technology to accelerate its clinical trials. Earlier this year, the European Medicines Agency (Europe’s version of the FDA) officially signed off on Unlearn’s technology for use in clinical trials, a major regulatory validation that the technology is ready to be deployed at broad scale.
A few years from now, expect it to be standard practice for pharmaceutical and biotechnology companies to incorporate digital twins as part of their clinical trial protocols to streamline a therapeutic’s path to market.
It’s worth noting that digital twins for clinical trials represent a compelling example of generative AI, even though they have nothing to do with buzzy text-to-image models. Producing simulated placebo outcomes for individual patients is an excellent example of how generative machine learning models can have a massive real-world impact – and create billions of dollars of value.
* Disclaimer: The author is a Partner at Radical Ventures, an investor in Unlearn.
– Rob Toews, partner at Radical Ventures
Trend: Come for the workflows, stay for the personalization
As more and more users interact with generative AI models, we are gaining a deeper understanding of the problems most immediately addressable by AI: ones where we have lots of training data already; where getting the correct answer 99% of the time is very useful, and the incorrect 1% won’t be disastrous; and where the underlying models can continually ingest human feedback and become better over time. As AI crosses the chasm into the mainstream, intuitive workflows will drive massive adoption, allowing those less familiar with AI to start seeing value quickly.
In the next generation of AI startups, the best products will be created by founders who focus on workflow design and fine-tuning models based on user feedback.
Two categories of startups that fit the mold are AI agents and AI-augmented SaaS. AI agents will accomplish repetitive knowledge work — whether that’s being a lawyer, engineer, accountant, or doctor. AI-augmented SaaS will depend on an AI layer to get more value from existing workflows — for example, adding transcription and summarization to a platform that already collects audio data or adding a language interface to streamline SaaS apps. In both cases, a human will still supervise to guarantee output quality. The user will give positive and negative feedback, which will be captured and used to tune the model.
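As a sketch of the AI-augmented SaaS pattern, here is one way transcription plus summarization could be bolted onto a platform that already collects audio; the open-source whisper package is real, while the summarization model, prompt, and feedback record are illustrative.

```python
import openai   # pip install openai
import whisper  # pip install openai-whisper

openai.api_key = "YOUR_API_KEY"

# Transcribe audio the platform already collects (e.g., a recorded customer call).
asr = whisper.load_model("base")
transcript = asr.transcribe("customer_call.mp3")["text"]

# Summarize with a general-purpose LLM; model, prompt, and settings are illustrative.
summary = openai.Completion.create(
    model="text-davinci-002",
    prompt="Summarize this call in five bullet points, then list action items:\n\n"
    + transcript,
    max_tokens=300,
    temperature=0,
)["choices"][0]["text"]

# The human reviewer's accept/edit/reject decision becomes labeled feedback that
# can later be used to fine-tune or re-rank the underlying models.
feedback = {"transcript": transcript, "summary": summary, "reviewer_decision": None}
```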
The founders who win will design interfaces and workflows that give users high levels of control and low cognitive overhead by innovating on top of the current prompting and auto-complete modalities. These workflows will accelerate common use cases with templates or specialized composable models while ensuring “break-glass” options are available for uncommon edge cases. The user won’t have to understand how the model works or shape themselves to it. And as the user interacts with the product, the data generated by accepted answers automatically feeds back into the data flywheel that drives personalization and retention.
These startups will focus on their core competencies and leave the development of general AI models to research labs and the open-source community, which has released very capable models. We already see text-to-image models like Stable Diffusion, audio transcription models like Whisper, and language models such as GPT-J and GPT-Neo. Startups will leverage the latest advances in AI research by swapping in new models as they become available and fine-tuning them on proprietary historical user feedback. The limiting factor today is the supply of product designers focused on interfaces that make it easy for the non-AI-aware consumer to engage with and quickly get value from the models. Moats will be in the comprehensive workflows and data collected as users engage with these models, which will inform more powerful future models.
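A minimal sketch of that swap-in point, using Hugging Face’s transformers pipeline (the checkpoints named are real public models; the task and prompt are illustrative): when the model is treated as configuration, upgrading to a newer open-source release is a one-line change.

```python
from transformers import pipeline  # pip install transformers

# The checkpoint is configuration, not architecture: swap "EleutherAI/gpt-neo-1.3B"
# for "EleutherAI/gpt-j-6B", or for a privately fine-tuned checkpoint, without
# touching the rest of the product.
MODEL_NAME = "EleutherAI/gpt-neo-1.3B"

generator = pipeline("text-generation", model=MODEL_NAME)

def draft_reply(customer_message: str) -> str:
    prompt = f"Customer: {customer_message}\nSupport agent:"
    out = generator(prompt, max_new_tokens=60, do_sample=False)
    # The pipeline returns the prompt plus the generated continuation.
    return out[0]["generated_text"][len(prompt):]

print(draft_reply("My invoice shows the wrong billing address."))
```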
– Cat Wu, Partner at Index Ventures
Trend: Chained prompts and multi-step automation
Now that many have experienced the power of prompt interfaces and the variety of creative and utilitarian use cases that large language models (LLMs) can bring, it’s time to go a layer deeper. The beauty of prompt interfaces and LLMs, from the user’s perspective, is that they can interpret intent and turn it into action. Thus far, we’ve primarily seen single-player experiences that accomplish one specific task at a time, like image generation or text completion. Next, we will see people build the infrastructure for chaining prompts together, allowing us to achieve multi-step actions (via LLMs and/or eventually interacting with APIs) and unlock massive ROI.
This has a variety of implications across both consumer and enterprise use cases. While some are low-hanging simple use cases that may not be venture scale (“make me a reservation at a quiet and romantic Asian restaurant in south Brooklyn for two between 6-8 pm on Tuesday”), others can provide material leverage to tasks across operations, procurement, data analysis, and more.
It’s likely that on the consumer side, we’ll see each of these use cases as wedges for more horizontal plays. On the enterprise side, these actions could be honed to be more industry or context-specific, with difficult-to-integrate or proprietary data pipes playing a larger role.
Chaining together prompts and different types of models (not always necessary) also helps get around some of the existing limitations of API-driven models like GPT-3, whose memory is fairly static: by chaining calls, you can query information, summarize it, and then utilize the result however you’d like. An example of this is creating new search engines, as was done recently with WebGPT.
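A sketch of that chaining pattern follows (two sequential calls to a generic completion helper; the helper, prompts, and model are illustrative and not how WebGPT itself works): the output of one prompt becomes the input to the next, which is how a long set of documents can be squeezed through a model with a fixed context window.

```python
import openai  # pip install openai

openai.api_key = "YOUR_API_KEY"

def complete(prompt: str) -> str:
    # Generic completion helper; model and decoding settings are illustrative.
    response = openai.Completion.create(
        model="text-davinci-002", prompt=prompt, max_tokens=300, temperature=0
    )
    return response["choices"][0]["text"].strip()

def answer_from_pages(question: str, pages: list[str]) -> str:
    # Step 1: compress each retrieved page down to facts relevant to the question.
    notes = [
        complete(
            f"Extract only the facts relevant to the question.\n"
            f"Question: {question}\nPage:\n{page}\nFacts:"
        )
        for page in pages
    ]
    # Step 2: answer from the compressed notes, which now fit in one context window.
    return complete(f"Question: {question}\nNotes:\n" + "\n".join(notes) + "\nAnswer:")
```

Real systems add retrieval, tool calls, and error handling around each step, but the leverage comes from the same place: each call’s output shapes the next call’s input.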
Overall, we’ve just begun to scratch the surface of how chained prompts, and the sequential actions they enable, could lead to more advanced workflows, new types of software products, and new paradigms of interfaces for common problems, with the help of AI and likely (some) human ingenuity.
– Michael Dempsey, Managing Partner at Compound
Trend: Industrial automation that solves real-world challenges
We’ve all read about the most recent breakthroughs in AI models, such as DALL-E 2, GPT-3, and so on. Businesses are being reimagined because of these innovations, with some developers and designers worrying about their jobs. At the same time, outside our offices, in places like restaurants, construction sites, and factories, we are facing some of the most severe labor shortages of our lifetime.
As an example, the average age of a welder is 55. Every year, there’s a 7% decrease in skilled human welders, while the demand for these welders increases by 4%. By 2024, there will be 400,000 welder vacancies in the U.S. alone.
This is why companies like Path Robotics that automate skilled labor are so critical. Path enables companies to use off-the-shelf robots to autonomously weld novel parts without requiring time-consuming and expensive reprogramming. Today, Path delivers autonomous welding for customers across all sorts of metal applications, from electrical poles to hydraulic fuel tanks to mufflers. With its software, robots learn to weld visually instead of being taught how to weld with code, which allows the company to improve performance over time through visual QA inspections. In the future, Path’s core technology could be applied to many other manufacturing tasks.
The construction industry faces a similar squeeze, where labor shortages are compounded by supply chain challenges. This painful combination has made it impossible for home or commercial property owners to complete construction projects on time. Ergeon is automating the entire construction process of fence building, using advanced AI to enable remote measurement and automate design, quoting, and more. This technology allows the company to complete its projects 10x faster than typical contractors. Ergeon has built one of the world’s largest home construction databases and empowers anyone to build.
Though not often discussed, many $100 billion businesses will be built outside our offices. It’s a once-in-a-lifetime opportunity to reimagine the world beyond our desks and computers, and we couldn’t be more excited about it.
* Disclaimer: Basis Set is an investor in Path and Ergeon.
– Lan Xuezhao, Founder at Basis Set
The Generalist’s work is provided for informational purposes only and should not be construed as legal, business, investment, or tax advice. You should always do your own research and consult advisors on these subjects. Our work may feature entities in which Generalist Capital, LLC or the author has invested.