AI agents and the future of work; 'invisible' deepfakes; robotaxis' tipping point?
⏩ Future Normal: Fast Forward #107
This week, three fascinating glimpses of the future normal:
Salesforce’s AI Agents; and augmentation over autonomy.
Flux’s amateur-looking AI; and the challenges of normal, ‘invisible’ deepfakes.
Waymo’s tipping point; and converting over preaching.
Remember: no one can predict the future. However, by asking the right questions you will be inspired to create a better future.
Use these snapshots as a jumping off point for the bigger, cross-category, cross-functional questions about what will become normal.
Side note: As we approach Q3/4 planning season, many of you will be looking ahead to 2025.
I’d love to help.
The two main things I can bring to your next event:
VisuAIse Futures: a new interactive, ‘multiplayer’ AI-powered creative experience.
Your Future Normal: a non-obvious trend keynote to reimagine your business.
Read more about these at the bottom of this email, if you’re interested.
Note: the Q3/4 event ‘season’ is filling up – from September I’ll be doing sessions in Berlin, New York, Las Vegas, Rome, Greece, Barcelona and Riyadh, as well as staying closer to home in London, Bournemouth and Berkshire :)
If you’d like to discuss bringing me to your next event, please do mail Renee Strom on renee@ideapress-speakers.com.
Einstein: Salesforce’s autonomous AI sales agents
Salesforce has followed up the July launch of its AI customer service agent with an announcement of two AI agents for sales teams: its SDR Agent can engage with inbound leads to answer questions, handle objections, and book meetings for human sellers, while its Sales Coach Agent helps salespeople practise meetings and calls.
A few observations:
1/ AI agents are coming. Kind of. AI evangelists paint a picture of widespread agent-based automation. I suspect that it’ll be longer than we think before fully autonomous agents handle a meaningful proportion of our daily tasks, end-to-end.
Instead, it seems much more likely that very soon we’ll exist within various networks of agents that handle the easy-but-boring bits of many smaller tasks.
My sense is that this won’t feel particularly revolutionary. It’ll just be a slightly more personalised and responsive experience than using a company’s website or app. Indeed, that sense of ‘meh’ would be a profoundly positive signal.
The moment something becomes normal it means it’s unremarkable. It’s won. Compare that to right now, when too many AI features simply offer a worse experience.
2/ Do we really want AI transparency? Zoom out and it’s blindingly obvious why employees and companies want these tools. Salesforce’s data shows that sales people typically only spend 30% of their time actually selling.¹
Yet one snippet jumped out at me from the video: all the AI-generated emails are signed “Sent by [human rep’s] AI agent”.
We can all understand why Salesforce has done this. Yet this decision will massively limit the ability of these AI agents to handle anything more than basic, functional exchanges. Recall the story of Koko, a mental healthtech app which tested using AI to write supportive messages for its users:
People preferred the AI-generated messages. Right up until they learned they were written by an AI. As its CEO said, “simulated empathy feels weird, empty.”
However as AI’s capabilities improve, I suspect smaller and less visible (and less ethical) businesses will start integrating AI-generated comms without disclosing them as such. The seedier side of AI influencer culture offers a dystopian glimpse of this.
Will AI messages be confined to more ‘functional’ comms (which seems to contradict all the data we have about people forming emotional bonds with avatars)?
Or will we just end up getting so many AI-generated messages that we’ll evolve to stop caring (my parents look in horror at my home screen with its 133,957 unread emails, 75 unread WhatsApp messages, 106 unread texts, and 322 Slack notifications).
Or will smart UX designers find clever ways to merge human and AI-generated comms in ways that our brains find less jarring?
3/ Augmentation > Automation. Salesforce’s Sales Coach Agent is equally, if not more, fascinating. Watch the video closely and you’ll see users being given prompts on what to say during live calls. For example, if a prospect mentions a competitor, the sales rep can be shown a list of features that the competitor doesn’t offer.
It’s hard not to get unreasonably excited about these use cases (well, if you’re a learning geek, like me). Instant, always-on, contextual feedback and advice. These are the copilots that people want!
One of my foundational beliefs about AI is that it will enable people to do ‘better’ jobs – i.e. more creative, more highly-skilled, and ultimately better paid. Yet the follow-up challenge is often: “how will people develop the skills to do these new jobs?”
In my recent presentations, I’ve been using Microsoft’s demo of its Copilot helping someone learn how to play Minecraft. Salesforce’s Sales Coach Agent shows how these copilots will translate into practical, professional contexts. Put simply:
AI will help you learn new skills faster, right when you need them.
Flux LoRAs & ‘invisible’ deepfakes
You’ll read a LOT about deepfakes in the second half of 2024, thanks to the US election. And while deepfakes are a big challenge for our political systems, in some ways they are also the least risky examples of this technology – when Donald Trump shares an image of Swifties for Trump, thousands of people analyse every detail.
Instead, I’m far more worried about the less obvious, normal examples that are around the corner.
Check out these AI images, generated with the ‘AmateurPhotographyV2’ Flux LoRA² – a fine-tuned variant of the new Flux image generator that is designed to create much more natural-looking images.
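For the curious, here’s roughly what a LoRA does under the hood: instead of retraining a model’s huge weight matrices, it learns a small low-rank correction that gets added on top, which is why LoRAs are cheap to train and tiny to share. The sketch below is purely illustrative (a real Flux LoRA applies these updates inside the attention layers of a diffusion transformer, not to a single random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# A frozen pretrained weight matrix (stand-in for one attention projection).
d = 512
W = rng.standard_normal((d, d))

# LoRA: learn a rank-r update instead of touching W itself.
r = 8
A = rng.standard_normal((r, d)) * 0.01  # "down" projection (trained)
B = np.zeros((d, r))                    # "up" projection (starts at zero)
alpha = 16                              # conventional scaling hyperparameter

def forward(x, scale=alpha / r):
    # Base model output plus the low-rank correction B @ A @ x.
    return W @ x + scale * (B @ (A @ x))

# Parameter count: full fine-tune vs LoRA.
full_params = W.size            # 262,144 weights in W
lora_params = A.size + B.size   # 8,192 weights -- ~3% of the full matrix

# Because B starts at zero, the adapted model initially matches the base model;
# training then nudges only A and B toward the new style (e.g. 'amateur photos').
x = rng.standard_normal(d)
assert np.allclose(forward(x), W @ x)
```

The key point: the ‘AmateurPhotographyV2’ file people are sharing isn’t a new model at all, just a few small matrices like `A` and `B` above, layered onto Flux at generation time.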
The deepfake paradox is that the most provocative or improbable celebrity deepfakes going viral on social media are rendered harmless by their very attention-grabbing nature.
Counter-intuitively, it will be the deepfakes that you never see that will be the most problematic.
The deepfakes that circulate among a small community. Like this wild story about a Baltimore school teacher who created a deepfake of his principal saying racist comments in an attempt to avoid being investigated for fraud. He was discovered, but it’s easy to imagine that slightly less damaging deepfakes will be very hard for targeted individuals to defend against.
Already, 1 in 10 minors believe their friends have created deepfake nudes of people they know. Currently, most popular AI-image generators produce images that look a little too glossy to be real. What happens when they look indistinguishable from those taken by a smartphone camera?
I don’t have the answer.
In The Future Normal, we wrote about Truepic and its technological solution to watermarking images. Yet it’s also increasingly clear that we’ll have to develop new social behaviours and legal structures that protect our ability to trust the images, video and audio we encounter. But this is very much a working thesis. I’d love to hear your thoughts.
Waymo is doing 100,000 paid robotaxi trips per week
Futurists have been talking about autonomous vehicles for so long they seem like a unicorn technology (back in the days when unicorns were mythical beasts, rather than billion dollar startups).
But for residents of its current markets (Los Angeles, San Francisco and Phoenix), the future is now, and increasingly normal – the CEO of Alphabet-backed Waymo announced it was now doing 100,000 paid trips per week, up from just 50,000 a few months ago.
As I was writing this, Azeem Azhar published a great piece exploring whether we’re reaching a tipping point when it comes to self-driving cars. It’s well worth a read if you’re keen to explore the technological background and trajectory more.
But here I want to focus on one quick human observation that he highlights, about how once people ride in self-driving vehicles, their trust skyrockets – “respondents in Phoenix and San Francisco who have been exposed to self-driving overindex the average American’s trust by 30 points, 67 to 37.”
This is a crucial datapoint given the current media scepticism around new autonomous technologies (at least in most ‘Western’ markets), and it suggests self-driving vehicles are far closer to mainstream acceptance than other ‘revolutionary’ technologies that failed to wow early adopters, such as crypto and the metaverse.
It’s also a great reminder – while I try to bring you non-obvious insights into the future normal, these always need to align with the blindingly obvious truths that won’t change.
Convert, don’t preach. A great customer experience will always beat novelty.
Can I inspire your team to seize the future?
This year I’ve delivered 20+ sessions, both live and virtually – from Brazil to Saudi Arabia, Slovenia to Shoreditch.
As well as my usual trend & innovation keynotes, I’m hugely excited about the reactions to my newest offering – VisuAIse Futures.
It is an interactive, ‘multiplayer’ creative experience that will see your audience think differently about AI:
“It was so refreshing to hear how AI can be used to power human imagination, rather than replace it. And then it was even better to actually experience it”
“Fantastic session! Hugely insightful and fun, too!”
“Brilliant. The feeling in the room was positively intense whilst the images were coming through!”
Feel the optimistic vibes it will bring to your event in the 2-minute video below.
If you’d like to discuss bringing me to your next meeting or event then please do reach out directly to Renee Strom or check out my speaking site.
Thanks for reading,
Henry
1. Indeed, I’d be curious if there are any information-based workers in large corporates who actually spend the majority of their time doing the primary task related to their job? If that’s you, get in touch and I’ll send you a copy of my book. Clearly you’ve got time to read it ;)
2. If, like me, you’re asking “what’s a LoRA?”, then here’s Perplexity to the rescue