Published November 12, 2022
It was the summer of 1994. I was developing image-processing software for use in hospitals, metallurgy labs, research facilities, and many other places. The software (called Image Workbench) could take an image from a digital camera, X-ray machine, microscope or any other such source. Various utilities were available in the software to enhance the image for better insights. One of the utilities (called the “blob counter”) could even count the number of objects in an image. The software was hard-coded – in the sense that there was no (what we now call) machine learning (ML). It was then that I was captivated by the idea of neural networks. Could they use artificial intelligence (AI) to derive more insights by learning from a library of images? I even brought on a couple of IIT-Bombay Computer Science undergraduates as interns to work on the problem.
That was my first brush with AI-ML. It was too early; the computational power was simply not there to do the processing at scale. For the 25-odd years since then, AI-ML seemed stuck in the hype-cycle phase – every year seemed to be the “Year of AI”. And then, almost suddenly, everything changed. As cloud technologies became cheaper (for both storage and computation), AI finally came into its own. In the past year, we have seen rapid advances in a technology called “generative AI”, where a few words are all that is needed to deliver an essay or an artwork, even an audio or video clip.
My lens in this series is that of a marketer creating and deploying email campaigns. How can AI transform email marketing? There is plenty of friction at each stage of the email creation and delivery process. Let’s view the marketer’s journey as “email engineering.” Emails need to be crafted – which means subject lines, images, text and landing pages. Segments have to be selected – who should the email campaign go out to? Send to too many inactives (those who haven’t opened an email from the brand for 90 or 180 days) and the domain’s reputation could be hurt. Send to too few and the outcome will be sub-optimal. Then there is the question of timing: when to send out the email.
I counted eight elements of an email where a marketer needs to make decisions: subject line, header, body (images and text), footer, landing page, segment, time, and delivery. This applies to broadcast emails, where the same email is sent out to large numbers of recipients. Then there are triggered emails, sent out when an event happens during a customer journey: for example, if an item is left unpurchased in a shopping cart, the ‘cart abandonment’ email gets triggered. Finally, there are transactional emails, sent out post-purchase. Could these be personalised to each individual? Could they help start the journey to the next purchase?
There is so much to email engineering. I have written previously about Email 2.0 and how AMP and Atomic Rewards promise a new dawn by combining interactivity and incentives to create hotlines and drive 10X conversion actions, and also about the possibility of ads and how they could result in free emails. It is now time to consider the impact AI can have on emails. The one common theme across all these innovations is the drive towards “frictionless hotlines.”
Let’s begin by understanding what AI is.
Wikipedia provides an overview:
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by animals and humans. AI research has been defined as the field of study of intelligent agents, which refers to any system that perceives its environment and takes actions that maximize its chance of achieving its goals.
The term “artificial intelligence” had previously been used to describe machines that mimic and display “human” cognitive skills that are associated with the human mind, such as “learning” and “problem-solving”. This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be articulated.
AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri and Alexa), self-driving cars (e.g., Tesla), automated decision-making and competing at the highest level in strategic game systems (such as chess and Go). As machines become increasingly capable, tasks considered to require “intelligence” are often removed from the definition of AI, a phenomenon known as the AI effect. For instance, optical character recognition is frequently excluded from things considered to be AI, having become a routine technology.
Artificial intelligence was founded as an academic discipline in 1956…In the first decades of the 21st century, highly mathematical-statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
Britannica adds: “Despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition…Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.”
From Investopedia: “Artificial intelligence is based on the principle that human intelligence can be defined in a way that a machine can easily mimic it and execute tasks, from the most simple to those that are even more complex. The goals of artificial intelligence include mimicking human cognitive activity. Researchers and developers in the field are making surprisingly rapid strides in mimicking activities such as learning, reasoning, and perception, to the extent that these can be concretely defined.”
More from IBM: “At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.”
A 2021 Netcore report on AI in Email Marketing provided an overview of various AI techniques:
Machine Learning: Using example data or experience to refine how computers make predictions or perform a task. Or, methodologies (how-to processes and guides) on how artificial intelligence would do these things.
Deep Learning: A machine learning technique in which data is filtered through self-adjusting networks of math loosely inspired by neurons in the brain. Or, using data and applying math that was inspired by how we think as humans.
Supervised Learning: Showing software labeled example data, such as photographs, to teach a computer what to do. Or, show examples to the computer to teach it how to do something.
Unsupervised Learning: Learning without annotated examples, just from experience of data or the world—trivial for humans but not generally practical for machines. Yet. Or, learning by applying what they know, which humans do every day, but machines haven’t yet needed to or been able to do this.
Reinforcement Learning: Software that experiments with different actions to figure out how to maximize a virtual reward, such as scoring points in a game. Or, adding a layer of ability to software to take actions/activities and calculate a logical reward.
Artificial General Intelligence: As yet non-existent software that displays a humanlike ability to adapt to different environments and tasks, and transfer knowledge between them. Or, creation of software that can think like humans – take information and apply it based on what it is doing, when and where.
This graphic from Wikipedia helps place AI, ML and DL in context:
In machine learning, machines learn how to discriminate according to two essential paradigms: supervised learning and unsupervised learning.
In the first case, the algorithm needs a set of classified data on which to train itself, so as to be able (hopefully) to discriminate new and unclassified data with good precision and accuracy. Examples of supervised learning are interpolation and regression algorithms, classical neural networks, etc. This paradigm has been successfully applied to computer vision, natural language processing, time series forecasting, etc.
Unsupervised learning does not need classified data to work. The training set is just a heap of unstructured data in which the algorithm tries to let emerge an “innermost” and hidden order. Examples of unsupervised learning are clustering algorithms, dimensionality reduction and feature extraction algorithms, self-organising maps, etc.
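The two paradigms can be sketched in a few lines of plain Python, using made-up toy data: a 1-nearest-neighbour classifier (supervised, learning from labelled examples) and a tiny 2-means clustering routine (unsupervised, letting structure emerge from unlabelled numbers). The numbers and labels below are purely illustrative:

```python
# --- Supervised learning: a 1-nearest-neighbour classifier ---
# Labelled training data: (feature, label) pairs.
train = [(1.0, "small"), (1.5, "small"), (8.0, "large"), (9.0, "large")]

def classify(x):
    # Predict the label of the closest labelled example.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# --- Unsupervised learning: 2-means clustering on unlabelled data ---
def two_means(points, iterations=10):
    # Start with two arbitrary centroids and refine them repeatedly.
    c1, c2 = points[0], points[-1]
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)
        c2 = sum(g2) / len(g2)
    return sorted([c1, c2])

print(classify(1.2))                              # close to the "small" examples
print(two_means([1.0, 1.2, 0.8, 8.0, 8.5, 9.1]))  # two cluster centres emerge
```

The classifier needed the labels; the clustering routine found the two groups on its own, which is the essential difference between the paradigms.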
Here is a taxonomy of various AI methods:
We will focus on two methods going forward: Deep Learning and Generative AI.
Deep Learning has been one of the key fields in AI. Here’s more from an IBM explainer:
Deep learning is a subset of machine learning, which is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain—albeit far from matching its ability—allowing it to “learn” from large amounts of data. While a neural network with a single layer can still make approximate predictions, additional hidden layers can help to optimize and refine for accuracy.
Deep learning drives many artificial intelligence (AI) applications and services that improve automation, performing analytical and physical tasks without human intervention. Deep learning technology lies behind everyday products and services (such as digital assistants, voice-enabled TV remotes, and credit card fraud detection) as well as emerging technologies (such as self-driving cars).
So, what’s the difference between machine learning and deep learning? The article continues:
Deep learning distinguishes itself from classical machine learning by the type of data that it works with and the methods in which it learns.
Machine learning algorithms leverage structured, labeled data to make predictions—meaning that specific features are defined from the input data for the model and organized into tables. This doesn’t necessarily mean that it doesn’t use unstructured data; it just means that if it does, it generally goes through some pre-processing to organize it into a structured format.
Deep learning eliminates some of data pre-processing that is typically involved with machine learning. These algorithms can ingest and process unstructured data, like text and images, and it automates feature extraction, removing some of the dependency on human experts. For example, let’s say that we had a set of photos of different pets, and we wanted to categorize by “cat”, “dog”, “hamster”, et cetera. Deep learning algorithms can determine which features (e.g. ears) are most important to distinguish each animal from another. In machine learning, this hierarchy of features is established manually by a human expert.
Deep learning is a machine learning technique that teaches computers to do what comes naturally to humans: learn by example. Deep learning is a key technology behind driverless cars, enabling them to recognize a stop sign, or to distinguish a pedestrian from a lamppost. It is the key to voice control in consumer devices like phones, tablets, TVs, and hands-free speakers. Deep learning is getting lots of attention lately and for good reason. It’s achieving results that were not possible before.
In deep learning, a computer model learns to perform classification tasks directly from images, text, or sound. Deep learning models can achieve state-of-the-art accuracy, sometimes exceeding human-level performance. Models are trained by using a large set of labeled data and neural network architectures that contain many layers.
While deep learning was first theorized in the 1980s, there are two main reasons it has only recently become useful:
- Deep learning requires large amounts of labeled data. For example, driverless car development requires millions of images and thousands of hours of video.
- Deep learning requires substantial computing power. High-performance GPUs have a parallel architecture that is efficient for deep learning. When combined with clusters or cloud computing, this enables development teams to reduce training time for a deep learning network from weeks to hours or less.
…Most deep learning methods use neural network architectures, which is why deep learning models are often referred to as deep neural networks.
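The “additional hidden layers” idea can be sketched in plain Python: a forward pass through two hidden layers and a sigmoid output unit. The weights here are random and untrained; the sketch shows only how layers stack, not a working model:

```python
import math
import random

random.seed(0)

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(inputs, weights, biases, activation):
    # One fully connected layer: weighted sum plus bias, then activation.
    return [activation(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def random_layer(n_in, n_out):
    # Untrained, random parameters -- purely to illustrate the structure.
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

w1, b1 = random_layer(3, 4)   # hidden layer 1: 3 inputs -> 4 units
w2, b2 = random_layer(4, 4)   # hidden layer 2
w3, b3 = random_layer(4, 1)   # output layer: one probability-like score

def forward(x):
    h1 = dense(x, w1, b1, relu)
    h2 = dense(h1, w2, b2, relu)
    return dense(h2, w3, b3, sigmoid)[0]

print(forward([0.5, -0.2, 0.9]))   # always strictly between 0 and 1
```

Training would adjust the weights via backpropagation over large labelled datasets; that, plus GPU-scale compute, is what the two bullet points above are about.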
Iqbal Sarker: “Deep learning (DL), a branch of machine learning (ML) and artificial intelligence (AI) is nowadays considered as a core technology of today’s Fourth Industrial Revolution (4IR or Industry 4.0). Due to its learning capabilities from data, DL technology originated from artificial neural network (ANN), has become a hot topic in the context of computing, and is widely applied in various application areas like healthcare, visual recognition, cybersecurity, and many more.”
The neural networks that I was exploring in 1994 are finally being put to good use!
One of the most exciting new AI spaces is “Generative AI.” Sequoia Capital has more:
Humans are good at analyzing things. Machines are even better. Machines can analyze a set of data and find patterns in it for a multitude of use cases, whether it’s fraud or spam detection, forecasting the ETA of your delivery or predicting which TikTok video to show you next. They are getting smarter at these tasks. This is called “Analytical AI,” or traditional AI.
But humans are not only good at analyzing things—we are also good at creating. We write poetry, design products, make games and crank out code. Up until recently, machines had no chance of competing with humans at creative work—they were relegated to analysis and rote cognitive labor. But machines are just starting to get good at creating sensical and beautiful things. This new category is called “Generative AI,” meaning the machine is generating something new rather than analyzing something that already exists.
Generative AI is well on the way to becoming not just faster and cheaper, but better in some cases than what humans create by hand. Every industry that requires humans to create original work—from social media to gaming, advertising to architecture, coding to graphic design, product design to law, marketing to sales—is up for reinvention. Certain functions may be completely replaced by generative AI, while others are more likely to thrive from a tight iterative creative cycle between human and machine—but generative AI should unlock better, faster and cheaper creation across a wide range of end markets. The dream is that generative AI brings the marginal cost of creation and knowledge work down towards zero, generating vast labor productivity and economic value—and commensurate market cap.
The fields that generative AI addresses—knowledge work and creative work—comprise billions of workers. Generative AI can make these workers at least 10% more efficient and/or creative: they become not only faster and more efficient, but more capable than before.
We have heard of or experienced some of the generative AI tools: OpenAI’s GPT-3 for text; DALL-E, Midjourney and Stable Diffusion for images; and Meta’s Make-A-Video for video.
From AImultiple: “Generative AI leverages AI and machine learning algorithms to enable machines to generate artificial content such as text, images, audio and video content based on its training data, in a manner which tricks the user into believing the content is real. Generative AI is the technology to create new content by utilizing existing text, audio files, or images. With generative AI, computers detect the underlying pattern related to the input and produce similar content.”
Techopedia adds: “Generative AI is a broad label that’s used to describe any type of artificial intelligence that uses unsupervised learning algorithms to create new digital images, video, audio, text or code. Until recently, most AI learning models have been characterized as being discriminatory. The purpose of a discriminatory learning algorithm is to use what’s learned during training to make a decision about new input. In contrast, the purpose of a generative AI model is to generate synthetic data that can pass a Turing Test. Because generative AI requires more processing power than discriminative AI, it is more expensive to implement.”
It gives an example: “Suppose you have the task of putting together an insurance brochure. You have a list of policies and costs, and benefits and details. The traditional way this would work is that a human writer would take a look at all of that raw data, and then take notes and write something in a narrative form that explains to the reader what each of these things is. With generative AI, the program can review the raw data, fashion a narrative around it, and create something that’s readable for a human reader, without a human writer being directly involved.”
Generative AI has exploded now. More from Sequoia: “Compute gets cheaper. New techniques, like diffusion models, shrink down the costs required to train and run inference. The research community continues to develop better algorithms and larger models. Developer access expands from closed beta to open beta, or in some cases, open source.”
10XDS adds: “Generative AI uses AI and machine learning algorithms to enable machines to generate artificial yet new content. For this, the technology uses existing text, audio files, videos, or images. The end result is a totally new content that tricks the user into believing the content is real. With generative AI, computers identify the underlying pattern related to the input and produce similar content. Various techniques like Generative adversarial networks (GANs), Transformers (GPT-3, LaMDA, Wu-Dao) are used for the purpose. Generating new content based on source data, differentiating, and identifying which generated data is closer to the original are few of the key activities that happen.”
Sequoia’s article (which was co-written with GPT-3) gives a glimpse into the future.
Generative AI can simplify the life of email marketers.
I used OpenAI’s GPT-3 with the prompt: “Write an essay on Generative AI”. This is what I got:
Generative AI is a form of artificial intelligence that is concerned with the generation of new content. This can be in the form of new text, new images, or new data.
Generative AI is used in many different applications, such as text generation, image generation, and data generation. There are many different methods that can be used for generative AI.
One popular method is called a generative adversarial network, or GAN. This method uses two neural networks, one that generates content and one that evaluates it. The generator network creates new content, while the evaluation network tries to distinguish between the real and generated content. The two networks work together to improve the quality of the generated content.
Another popular method for generative AI is called a Variational Autoencoder, or VAE. This method uses a single neural network to generate new content. The network is trained to reconstruct data from a latent space. The latent space is a space that contains all of the information about the data, but is not directly observable. The VAE can generate new data by sampling from the latent space.
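The generator/discriminator interplay described above can be illustrated with a deliberately tiny toy, not a faithful GAN implementation: the “real data” is a single number, the generator is one parameter, and the discriminator is a one-input logistic unit. The learning rates and step count are arbitrary:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

REAL = 5.0              # the "real data": a single number to imitate
theta = 0.0             # generator parameter: the generator just outputs theta
a, b = 0.0, 0.0         # discriminator: D(x) = sigmoid(a*x + b)
lr_d, lr_g = 0.1, 0.02  # the generator learns more slowly, for stability

for _ in range(1000):
    fake = theta
    d_real = sigmoid(a * REAL + b)
    d_fake = sigmoid(a * fake + b)
    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    a += lr_d * ((1 - d_real) * REAL - d_fake * fake)
    b += lr_d * ((1 - d_real) - d_fake)
    # Generator: gradient ascent on log D(fake) -- move toward what D calls real.
    d_fake = sigmoid(a * fake + b)
    theta += lr_g * (1 - d_fake) * a

print(round(theta, 2))  # theta has drifted from 0.0 toward REAL
```

The two updates pull against each other, and the generator’s output ends up near the real data: the same adversarial dynamic, at toy scale, that produces photorealistic images in full-size GANs.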
There are many different applications for generative AI. One application is text generation. Generative AI can be used to generate new articles, or to create new versions of existing articles. This can be used to create new versions of Wikipedia articles, or to generate summaries of news articles.
Another application for generative AI is image generation. Generative AI can be used to generate new images, or to edit existing images. This can be used for creating photorealistic images, or for creating images with desired properties. For example, generative AI can be used to create images of people that don’t exist, or to create images of buildings that haven’t been built yet.
Generative AI can also be used for data generation. This can be used to create synthetic data sets, or to fill in missing values in data sets. This can be used to generate new customer data for a business, or to generate new medical data for research.
Generative AI is a powerful tool that can be used for many different applications. It is important to understand how generative AI works, and how it can be used, in order to make the most of it.
A second try with the same prompt gave the following:
In recent years, there has been a growing interest in artificial intelligence (AI) that can generate new things, rather than simply recognize or imitate existing things. This new type of AI is often referred to as generative AI.
Generative AI is based on the idea of artificial general intelligence (AGI), which is AI that can understand or learn any intellectual task that a human being can. While traditional AI is good at narrow tasks such as playing chess or driving a car, it is not yet capable of more general tasks such as understanding natural language or making common-sense decisions.
AGI is still largely theoretical, but there are some promising approaches to building it. One popular approach is to use neural networks, which are computer systems that are designed to mimic the structure and function of the human brain.
Neural networks are good at learning from data, and they have been used to create systems that can generate new images, text, and music. These systems are still limited in their ability to generate truly original content, but they are getting better all the time.
As generative AI gets better at creating new things, it will have a profound impact on many aspects of our lives. For example, it could be used to create new medicines, design new buildings, or even write new works of fiction.
Generative AI is still in its early stages, but it has the potential to change the world in ways that we can only imagine.
A more specific prompt “Write an essay on Generative AI use for email marketing” delivered the following:
With the advent of generative AI, email marketing is set to become more personalized and effective than ever before. By harnessing the power of artificial intelligence, businesses will be able to automatically generate custom emails for each individual customer based on their past behavior and preferences. This will allow businesses to send highly targeted and relevant emails that are more likely to result in a conversion. Additionally, generative AI can help businesses to automatically segment customers into different groups based on their interests and needs, making it easier to target them with the most relevant content.
It gets even better. I gave the prompt “How can AI be used by email marketers?” and got the following:
There are a few ways that AI can be used by email marketers:
Automated email marketing: Email marketing platforms like HubSpot and SendGrid offer AI-powered features that can automate the process of sending emails to subscribers based on their interactions with your website or app.
Personalized email content: AI can be used to analyze a subscriber’s past behavior in order to send them more personalized email content that is relevant to their interests.
Automated segmentation: AI can be used to automatically segment your email list into smaller groups based on factors like demographics, interests, or past behavior. This allows you to send more targeted email campaigns to each segment, which can lead to higher conversion rates.
Automated A/B testing: AI can be used to automatically test different versions of your email campaigns (known as A/B testing) in order to determine which one performs the best. This can help you optimize your email campaigns for higher conversion rates.
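The automated A/B testing idea is often implemented as a “bandit”: keep sending the variant that is winning, while still exploring the alternatives. A minimal epsilon-greedy sketch, with hypothetical subject lines and made-up open rates in a simulated send loop:

```python
import random

random.seed(42)

subject_lines = ["Last chance: 20% off", "Your cart misses you"]
true_open_rates = [0.02, 0.25]   # hypothetical, unknown to the algorithm

sends = [0, 0]
opens = [0, 0]
epsilon = 0.1                    # fraction of traffic reserved for exploration

def pick_variant():
    # Explore occasionally (and until both variants have data), else exploit.
    if random.random() < epsilon or 0 in sends:
        return random.randrange(len(subject_lines))
    rates = [opens[i] / sends[i] for i in range(len(sends))]
    return rates.index(max(rates))

for _ in range(5000):
    i = pick_variant()
    sends[i] += 1
    if random.random() < true_open_rates[i]:   # simulate the recipient
        opens[i] += 1

print(subject_lines[sends.index(max(sends))], sends)
```

Unlike a classic 50/50 split test, most of the traffic migrates to the better subject line while the test is still running, which is what makes the approach attractive for live campaigns.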
“What is the difference between deep learning and Generative AI?” returned the following:
Deep learning is a branch of machine learning that is concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.
Generative AI is a branch of artificial intelligence that is concerned with creating things, such as images or text, from scratch.
I then wore the hat of an email marketer giving the prompt: “Write an email to sell iPhone 14.”
If you’re in the market for a new iPhone, the latest iPhone 14 is now available. This latest model features a new design, updated features, and is available in a variety of colors.
Here are some of the key features of the iPhone 14:
– A14 Bionic chip with next-generation Neural Engine
– Pro camera system with Ultra Wide, Wide, and Telephoto cameras
– 5G connectivity
– LiDAR Scanner
– OLED display
If you’re interested in purchasing the iPhone 14, please reply to this email and we’ll be happy to assist you. Thank you for your time.
Next prompt: “Write an email to pitch a vacation in Goa.”
If you’re looking for a sunny and relaxing vacation, then look no further than Goa, India! With its beautiful beaches, stunning scenery, and rich culture, Goa is the perfect place to unwind and enjoy some well-deserved rest and relaxation.
Here are just a few of the many reasons why Goa is the perfect vacation destination:
-The beaches are absolutely breathtaking and offer a wide range of activities, from swimming and sunbathing to windsurfing and parasailing.
-The food is incredible, with a wide variety of both Indian and international cuisine on offer.
-There is a rich culture to explore, with a fascinating history dating back hundreds of years.
-The people are warm and friendly, and you’re sure to feel right at home in Goa.
So what are you waiting for? Book your ticket to Goa today and start planning the perfect vacation!
All of the above cost me 3 cents (Rs 2.50).
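For the curious, these completions were generated through OpenAI’s API. A minimal sketch of the request body (the model name and parameter values are illustrative defaults, not recommendations):

```python
import json

API_URL = "https://api.openai.com/v1/completions"  # endpoint at the time of writing

def build_completion_request(prompt, model="text-davinci-002",
                             max_tokens=512, temperature=0.7):
    # The JSON body for a GPT-3 completion call. Higher temperature
    # makes the output more varied; max_tokens caps the response length.
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_completion_request("Write an essay on Generative AI")
print(json.dumps(body, indent=2))
# Actually sending it requires an HTTP POST to API_URL with an
# "Authorization: Bearer <your API key>" header.
```

Billing is per token processed, which is why a handful of essays and emails added up to just a few cents.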
Using Stable Diffusion
I decided to then try my hand at image generation.
I gave the following prompt to Stability AI’s DreamStudio (built on Stable Diffusion): “How does Generative AI look to an email marketer”, and got this in a matter of seconds:
Probably a bit too abstract for me!
“Email marketer at work” delivered the following:
I then decided to imagine myself as an email marketer.
“A New Year Mail to sell iPhone 14” got the following:
“An email to sell a vacation in Goa” gave this output:
If these responses are an indication (and remember that the underlying tech will get better very fast and I had done no tweaking to the default options), email marketers (and recipients) are in for a textual and visual treat!
More on AI – 1
NYTimes: “Over the past 10 years — a period some A.I. researchers have begun referring to as a “golden decade” — there’s been a wave of progress in many areas of A.I. research, fueled by the rise of techniques like deep learning and the advent of specialized hardware for running huge, computationally intensive A.I. models. Some of that progress has been slow and steady — bigger models with more data and processing power behind them yielding slightly better results. But other times, it feels more like the flick of a switch — impossible acts of magic suddenly becoming possible. Just five years ago, for example, the biggest story in the A.I. world was AlphaGo, a deep learning model built by Google’s DeepMind that could beat the best humans in the world at the board game Go. Training an A.I. to win Go tournaments was a fun party trick, but it wasn’t exactly the kind of progress most people care about. But last year, DeepMind’s AlphaFold — an A.I. system descended from the Go-playing one — did something truly profound. Using a deep neural network trained to predict the three-dimensional structures of proteins from their one-dimensional amino acid sequences, it essentially solved what’s known as the “protein-folding problem,” which had vexed molecular biologists for decades.”
The Economist on “foundational models” (June 2022):
The conceptual breakthrough needed to make full use of this [computing] power came about in 2017. In a paper entitled “Attention is all you need” researchers at Google and the University of Toronto described the novel software architecture to be used by Google’s BERT. They had thrown away all the mechanisms which worked on input data sequentially, mechanisms researchers had previously seen as essential; instead they just used a mechanism that looked at things all at once. This new approach meant that programs could “pay attention” to patterns they had learned were salient in a field of text, rather than having to work through it word by word.
Such models are trained using a technique called self-supervised learning, rather than with pre-labelled data sets. As they burrow through piles of text they hide specific words from themselves and then guess, on the basis of the surrounding text, what the hidden word should be.
It has since turned out that all these clever techniques can be applied to sequential data other than language, including pictures, videos and even large molecular data sets. Instead of guessing the next combination of letters, graphical models such as OpenAI’s DALL-E predict the next cluster of pixels.
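The “hide a word and guess it” recipe of self-supervised learning can be sketched as a data-preparation step: no labels are needed, because the text supplies its own targets. The sentence below is just an example:

```python
import random

random.seed(1)

def mask_one_word(sentence, mask_token="[MASK]"):
    # Hide one word at random; the hidden word becomes the training target.
    words = sentence.split()
    i = random.randrange(len(words))
    target = words[i]
    masked = words[:i] + [mask_token] + words[i + 1:]
    return " ".join(masked), target

masked, target = mask_one_word("email marketing can benefit from generative AI")
print(masked)   # the sentence with one word replaced by [MASK]
print(target)   # the word the model must guess from the surrounding context
```

Run over billions of sentences, this simple trick yields an effectively unlimited supply of training examples, which is what lets foundation models learn from raw text at scale.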
More from The Economist: “Foundation models are the latest twist on “deep learning” (DL), a technique that rose to prominence ten years ago and now dominates the field of AI. Loosely based on the networked structure of neurons in the human brain, DL systems are “trained” using millions or billions of examples of texts, images or sound clips…Foundation models have some surprising and useful properties. The eeriest of these is their “emergent” behaviour—that is, skills (such as the ability to get a joke or match a situation and a proverb) which arise from the size and depth of the models, rather than being the result of deliberate design. Just as a rapid succession of still photographs gives the sensation of movement, so trillions of binary computational decisions fuse into a simulacrum of fluid human comprehension and creativity that, whatever the philosophers may say, looks a lot like the real thing. Even the creators of these systems are surprised at their power.”
More on AI – 2
Sam Altman (CEO of OpenAI, creator of GPT-3, in June 2021): “I think we have just begun the realm of A.I. being able to be what we call general purpose A.I. instead of these narrow models that can do one thing. So now we have models that can understand to some sense, to some definition of that word, the world, what’s going on, and then a single model can accomplish a wide variety of tasks for someone and can learn new things pretty quickly. So last year, OpenAI released something called GPT-3. It’s a large text model. And that one model is now being used by thousands of people — developers more than that in terms of end users — for all sorts of tasks. And I think we will see, in the coming years, A.I. medical advisors that give anyone really high quality medical advice, higher than they can maybe get anywhere in the world, A.I. tutors that can teach you math or other topics. We’re starting to see people develop A.I. programming assistance that can help you write code. And these same models are being adapted for all of these different uses. And I think we’re starting now heading into a world where a lot of the things that people want, they can, through a text interface or a visual interface, have an A.I. help them do.”
Tyler Cowen: “More AI does not necessarily mean fewer jobs. Automation and mechanization have been around for centuries, and they create jobs as well as take them away. The job created may be direct, as in robotics production and the surrounding infrastructure, or indirect, as when AI-supported animation is sold and marketed…. The benefits of AI do not accrue only to those in the technology sector. AI makes many goods and services cheaper, and that in turn benefits the poor and disadvantaged. If software routes packages and shipments more efficiently, then transportation costs will be lower. If software and AI programs help economize on the use of electricity, then it will be easier to mitigate climate change. As computational biology improves health care, the sick will benefit… AI also is going to require a lot of job retraining. More jobs will require new skills that involve working with software, sometimes in the form of AI.”
HBR: “The promise of AI’s prediction technology is similar to that of electricity and personal computing. Like them, AI began by resolving a few immediate problems, creating value in isolated, tightly bounded applications. But as people engage with AI, they will spot new opportunities for creating solutions or improving efficiency and productivity. Restaurants, for example, will most likely become more deeply embedded in their own supply chains and perhaps more flexible in their menu offerings. As these opportunities are realized, they will create new challenges that in turn provide more opportunities. So as AI spreads across supply chains and ecosystems, we will find that all the processes and practices we took for granted are being transformed—not by the technology itself but by the creativity of the people who are using it.”
Raj Venkatesan, author of “The AI Marketing Canvas: A Five-Stage Road Map to Implementing Artificial Intelligence in Marketing”: “An effective strategy is to use AI to personalize each aspect of customer engagement; acquisition, retention, growth, and advocacy (or word of mouth). Modern marketing is about using first party consumer data and algorithms to personalize the firms’ marketing for acquisition, retention, growth and advocacy. The idea is to put the customer in the center and use AI to enhance the customer’s experience with the brand… The five-step roadmap provides marketing managers a tool to plan and develop their AI marketing strategy. It consists of five steps; foundation, experimentation, expansion, transformation, and monetization. Foundation is about collecting relevant first party data that is the essential input for any AI algorithm. In the experimentation stage, a firm attempts to personalize one aspect of customer engagement. In the expansion stage, the firm personalizes more than one aspect of customer engagement, and in the transformation stage, all aspects of customer engagement are personalized. In the monetization stage, the firm uses all the developed AI capabilities to develop a services/software platform that provides on-demand services to other business, thereby developing a new revenue stream.”
Em-AI-l Marketing – 1
Now, let’s start putting it all together in the context of email marketing. It is all about the three Rs: right person, right message, right time. As I mentioned earlier, there are 8 elements in an email that we need to discuss. Four of them are where some AI is already being used (subject, segment, time, delivery); the other four are where AI can be applied in the future (header, body, footer, landing page).
Subject: Subject line optimization (SLO) has been around for some time. From the Netcore AI in Email Marketing report: “Subject lines are optimised based on previous history, an input of spam words to avoid, and can generate multiple subject lines in seconds, that used to take marketers hours to create.” More from YourStory: “AI can [use] machine learning algorithms to analyze large amounts of data in a short time and find the best-performing subject lines for specific groups of people. AI can help you personalize the subject line based on a recipient’s past and current behavior, i.e., previous purchases, clicks on specific links, location, and other data points available within an organization’s database. For example, you can create a unique series of subject lines for baby boomers and another one for millennials, based on what worked best for these audiences in the past.”
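The idea above can be sketched in code. This is a minimal, illustrative sketch of SLO-style ranking, not any vendor's actual system: candidate subject lines are scored against hypothetical historical keyword lifts per audience segment, with spam words disqualified. The `SPAM_WORDS` and `KEYWORD_LIFT` data are invented for illustration; a real system would learn these from campaign history with an ML model.

```python
# Illustrative subject-line optimisation (SLO) sketch.
# All data below is hypothetical; real systems learn it from history.

SPAM_WORDS = {"winner", "free!!!", "act now"}

# Toy open-rate lift per keyword, per audience segment.
KEYWORD_LIFT = {
    "millennials": {"deal": 0.04, "exclusive": 0.07, "sale": 0.02},
    "boomers": {"deal": 0.06, "exclusive": 0.03, "sale": 0.05},
}

def score_subject(subject: str, segment: str, base_rate: float = 0.10) -> float:
    """Predict an open rate: base rate plus keyword lifts; zero if spammy."""
    words = subject.lower().split()
    if any(w in SPAM_WORDS for w in words):
        return 0.0  # spam words risk the filters, so disqualify outright
    lift = sum(KEYWORD_LIFT.get(segment, {}).get(w, 0.0) for w in words)
    return base_rate + lift

def best_subject(candidates: list[str], segment: str) -> str:
    """Pick the highest-scoring candidate subject for the given segment."""
    return max(candidates, key=lambda s: score_subject(s, segment))
```

A marketer (or the platform) can then generate dozens of candidates and let the scorer pick per segment, which is exactly the "multiple subject lines in seconds" workflow the report describes.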
Segment: Segmentation is about figuring out the right target audience for a campaign. While in the future each person could be sent a personalised message (segment of one) every time, most brands are not there yet. So, the focus for AI is on predicting the right segments for a campaign. Netcore: “Two basic approaches can be used, which will no doubt be expanded on as we move into this “new world” of marketing. Experiences can be created individually based on each user’s own behaviour, or marketers can leverage modeling to create journeys based on activities of similar users.” YourStory: “With the help of AI, marketers can segment their email lists based on data collected from existing customers. This data includes purchase history, browsing behaviour, and preferences. The AI algorithm uses that information to create segments and deliver personalized emails to each one of them.”
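To make the segmentation idea concrete, here is a rule-based RFM-style (recency, frequency, monetary value) sketch. Real systems would use ML clustering or look-alike modelling as the Netcore quote describes; this toy version, with invented thresholds, only shows the shape of the problem, including excluding long-inactive users who could hurt domain reputation.

```python
# Illustrative rule-based segmentation sketch; thresholds are invented.
from dataclasses import dataclass

@dataclass
class Customer:
    email: str
    days_since_last_open: int   # recency of email engagement
    orders_last_year: int       # frequency
    spend_last_year: float      # monetary value

def segment(c: Customer) -> str:
    """Assign a customer to a coarse campaign segment."""
    if c.days_since_last_open > 180:
        return "dormant"        # risky to mail: hurts domain reputation
    if c.days_since_last_open > 90:
        return "inactive"
    if c.orders_last_year >= 6 and c.spend_last_year >= 500:
        return "best"
    return "active"

def build_audience(customers: list[Customer], wanted: set[str]) -> list[str]:
    """Select addresses belonging to the requested segments."""
    return [c.email for c in customers if segment(c) in wanted]
```

An ML-driven version would replace the hand-written rules in `segment()` with a model trained on purchase history, browsing behaviour, and preferences.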
Time: Send time optimization (STO) is also common. AI works out what is the best time band to send an email campaign at an individual level, based on past email activity. Netcore: “With machine learning, the optimization is 1-1. The time to deliver a particular message is determined based on when the user is actively checking email, based on previous behaviour of that user. With Send Time Optimisation, marketers can begin to get serious about Flash sales or limited time offers, knowing they are more likely to be sending when the user is active.”
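A minimal sketch of the 1-1 send-time idea: pick each user's best send hour from the hours at which they previously opened emails. A production system would also model day-of-week, timezone, and recency decay; this version is deliberately simple.

```python
# Illustrative send-time optimisation (STO) sketch.
from collections import Counter
from datetime import datetime

def best_send_hour(open_times: list[datetime], default_hour: int = 10) -> int:
    """Return the hour (0-23) at which this user most often opens email."""
    if not open_times:
        return default_hour  # no history: fall back to a global default
    hours = Counter(t.hour for t in open_times)
    return hours.most_common(1)[0][0]
```

For a flash sale, the campaign scheduler would queue each recipient's email into their own best hour band rather than blasting everyone at once.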
Delivery: From Netcore: “If a sender (or marketer) hits more than a certain number of invalid recipients in a minute, ISPs like Gmail would grow suspicious and slow down the acceptance rate of emails from that marketer. The AI layer looks for and understands these minute signals, fixes the root cause and feeds this as a pattern to not occur again, safeguarding the email infrastructure.” AI can also help with domain/IP warmup. “During the warmup process, the number of emails a marketer should send each day is based on the performance of the emails previously sent, plus the engagement those emails generated. Depending on the success of the previous emails, the strategy may need to be adjusted. Machine learning can help determine the exact strategy that’s needed to continue and improve warmups.”
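The warmup logic described above can be sketched as a simple feedback loop: ramp daily volume only when the previous day's engagement was healthy, and back off when bounces spike. The thresholds and multipliers below are invented for illustration; real ML-driven warmup would tune them per ISP from observed signals.

```python
# Illustrative domain/IP warmup sketch; all thresholds are invented.

def next_day_volume(sent: int, bounces: int, opens: int,
                    cap: int = 100_000) -> int:
    """Decide tomorrow's send volume from today's results."""
    bounce_rate = bounces / sent if sent else 0.0
    open_rate = opens / sent if sent else 0.0
    if bounce_rate > 0.05:
        return max(sent // 2, 1_000)   # ISPs grow suspicious: back off
    if open_rate >= 0.10:
        return min(sent * 2, cap)      # healthy engagement: double volume
    return sent                        # hold steady and gather more signal
```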
Em-AI-l Marketing – 2
We will now take up four elements where techniques like Generative AI will make a huge difference in the future. The email header, body, footer and landing page can all be improved using AI. This goes beyond just personalised recommendations in emails, which is already being done by many marketers. This next phase in email engineering is about crafting a message for an individual by creating a unique email to maximise the probability of action. This is where AI can combine with AMP and Atomic Rewards to deliver 10-50X more conversion actions. There is plenty of room for improvement: just 10 in 100 emails are opened and only 1 is clicked. With such high ignore/delete rates, email marketing can indeed do with some AI magic.
Header: Emails need a header – like a Search bar. Websites and apps have prominent search bars, and much of our daily life revolves around doing searches through the browser or on our mobile phone. Yet emails have lacked a search bar because they haven’t been interactive. AMP can change that by showing the results within the email right below the search bar – and providing an instant “Add to Cart” option and perhaps even a “Pay” action. The AI-powered search bar could have pre-filled text, based on previous history, to nudge recipients to search. The results shown need to be limited to a handful because most emails are now opened on mobiles.
Body (images and text): The email body is the heart of the message. This is what takes up most marketer time. Typically, agencies help with the creatives. The strategising and designing can take up time. Marketers can only create a limited number of creatives, so email body content also tends to be repeated. This is where generative AI can step in, matching an infinite pool of text and images to user personas. No such system exists as of today; it will need to be built. Brands and email service providers (ESPs) have a huge amount of data which could be used. In fact, even the templates used for sending emails could be personalised. The marketer should just need to say, “I want a Valentine’s Day campaign sent out to my Best Customers” – and the campaign gets executed.
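Since no such system exists yet, here is only a skeleton of persona-matched body assembly. The `generate_copy` function and `PERSONA_STYLE` table are hypothetical stand-ins: in a real system the template filling below would be replaced by a call to a generative model trained on the brand's and ESP's data.

```python
# Skeleton of persona-driven body generation; everything here is a
# hypothetical placeholder for a future generative-AI pipeline.

PERSONA_STYLE = {
    "best_customer": {"tone": "warm", "offer": "early access"},
    "new_customer": {"tone": "informative", "offer": "a 10% welcome code"},
}

def generate_copy(campaign: str, persona: str) -> str:
    """Stand-in for a generative model: compose body copy per persona."""
    style = PERSONA_STYLE.get(persona, {"tone": "neutral", "offer": "our picks"})
    return (f"[{style['tone']} copy] {campaign}: "
            f"enjoy {style['offer']}, picked for you.")
```

The marketer's one-line instruction (“a Valentine’s Day campaign to my Best Customers”) would then fan out into one unique body per recipient.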
Footer: As I have explained earlier, there is a lot of room for innovation with the email footer by including brandlets, gamelets and ads. AMP and Atomic Rewards are central to these components. Where AI comes in is by creating the right AMPlets for each consumer. Think of the footer the way TikTok uses the mobile screen: showing one video after another to capture information about user actions and continuously personalising. Every AMPlet is an opportunity to get to know the consumer better – the interactivity provided by AMP and the incentives offered as Atomic Rewards make it easy and exciting for the consumer to respond.
Landing page: While AMP can do away with the need for landing pages by moving the conversion funnel inside the email, if marketers want one, Generative AI combined with machine learning can create beautiful, personalised pages which nudge consumers to actions. Think of these as disposable landing pages – created just for the moment by a machine.
AI is the future of email marketing. So far, AI – in the form of machine learning – has helped with campaign management. Now, the time has come for AI to be used in the creative process. By removing friction from a marketer’s life, AI is the enabler for taking email marketing to even greater heights and RoI in the years to come.
Email – which barely saw any innovation in the past 15 years other than the move from text to HTML – is now getting a new life (think Email 2.0) with all the new innovations: AMP, Atomic Rewards, Ads and now AI. By helping marketers build 2-way hotlines at the crux of the brand-customer relationship, these innovations let marketers slash AdWaste and push further and faster on the path to profitability.