WSJ: OpenAI Chairman on Elon Musk Bid and the Future of AI Agents

Disclaimer:

This blog post was auto-generated by Accunote AI, aiming to make audio knowledge sharing more accessible. While we strive for accuracy, please note that AI-generated content may contain minor inaccuracies. The original ideas and insights belong to their respective creators: @WSJ News


Host 00:00:00
So we are talking about AI agents, and there does seem to be a little bit of confusion about what AI agents are and what they actually do. So now, as the cofounder and CEO of your new company, Sierra, can you tell us: what is an AI agent and what can it do?


Bret Taylor 00:00:16
Yeah. I'll take it maybe from the academic context to the more practical, especially given the audience.
Bret Taylor 00:00:22
So the word agent comes from the word agency, and I like to think about it as having a piece of software and affording it some ability to make decisions and have agency. And that level of autonomy, I think, is really distinctly different than most of the software that preceded it.
Bret Taylor 00:00:39
I like to think of agents in really three categories, and I'll strategically end with the one that my company does. But let's start with the others. The first is personal agents.
Bret Taylor 00:00:50
And I think that this is a domain that, if you look at companies like OpenAI or Apple or Google, all of them will be working in, to make agents that act on your behalf as an individual.
Bret Taylor 00:01:04
There's the proverbial help you plan a vacation, and help triage your email inbox. Within the context of a company, there's probably a lot that agents can do to, you know, sort of be that proverbial Iron Man suit for all of us, to help us be more productive.
Bret Taylor 00:01:18
I think this is an area that gets a lot of press, but is perhaps the farthest out from really being industrial grade. If you think about how much a personal agent has to generalize to be effective, the technical, information security, and integration challenges are formidable, just because the universe of things that we interact with as individuals is so broad.
Bret Taylor 00:01:40
The other class of agent that I think is a bit more practically deployable today is persona-based agents. So agents that can code, agents that can review contracts, agents that can analyze a sales report.
Bret Taylor 00:01:55
At my company, all of our engineers use Cursor, which is one of the companies making coding agents. I've talked to a number of folks in this room about Harvey, which is a legal platform that's doing, sort of, the work of, at least as I understand it, paralegals, though I'm not a legal expert. So someone at Harvey is probably wincing right now, because I'm probably underselling their product.
Bret Taylor 00:02:17
I think that the reason I say this is a bit more practical right now is within the context of a specific persona, specific role, the systems you integrate with, how you implement deterministic and probabilistic guardrails, the different people interacting with those agents are just much more well defined. So you narrow the scope of generalizability, and you go from the domain of science to the domain of engineering. And I think that's why you're seeing a lot of really interesting applications of those as well.
Bret Taylor 00:02:47
The final category, which is the one my company, Sierra, works on, is the agent that represents your company's customer experience. And the way I think about it is, if we were here in 1995, I'd probably be pitching you all on making a website, you know, and talking about how until your company has a website, you don't exist digitally. And maybe 10 years ago, I'd be pitching you on: you should all have a mobile app and be listed in the App Store, which would be meaningful for a subset of your companies out there. In 2025, I think every one of your companies will have an AI agent that has your brand at the top of it, that is as complete a customer experience as your website or your mobile app. If it's an insurance company, it will help you file a claim or add someone to your policy.
Bret Taylor 00:03:38
If it's a consumer electronics company, it might help you with technical support. If you're a telecommunications company, it might do subscription churn management and help convince people not to cancel, or to upgrade or downgrade their plan. Everything you can do on your website, your agent will do as well. And I think this one's special to me because, when you think of the way your CEO might think of the impact of agents, the one with your brand on top is particularly special and particularly important.
Bret Taylor 00:04:07
I love going to the Wayback Machine and looking at websites from 1995, and they're surprisingly narrow. You know: welcome to the World Wide Web, here's our physical location.
Bret Taylor 00:04:19
Yeah. I don't know how many people you have working on your dot-coms today at your company, but it's probably fairly big, and it's fairly distributed. It's not one project. It's managed by multiple teams, because it really is the entirety of your company in a digital form factor. And I think most agents today are largely customer service, and that's really our sweet spot. But I think if we fast forward, we'll look at those agents with the same level of, oh, isn't that cute?
Bret Taylor 00:04:44
You know, it will come to encompass your entire company customer experience. And I think that's really exciting. And, anyway, those are the three classes that I see. I think the last two are more practically deployable today because, again, by narrowing the domain of how much the agent has to generalize, you can engineer your way around some of the structural shortcomings of today's models.


Host 00:05:07
Right. And what you're describing sounds really great, and maybe at some point in the future it's more widespread. But at the moment, so many enterprises are still sorting through what generative AI means for their organizations, how to get value out of the various language models, sorting through AI governance. And so AI agents seem pretty far off, and we'll go to a polling question in a bit that gets to how many of you are actually using agents. But what's your expectation for when they actually become this future that you're describing? Because right now, it's not quite there.


Bret Taylor 00:05:38
I think it's important to start with business problems rather than technologies. Put another way, I'm not sure it's constructive to start with the premise of how can we deploy more agents at our company, because that's not a goal unto itself.
Bret Taylor 00:05:58
The way I think about it is, you know, what are the domains at your company where... well, I'll just take customer service, an area that Sierra serves directly.
Bret Taylor 00:06:06
You know, the average price of a phone call to, like, a typical consumer brand will be anywhere between 5 and 20 dollars, mainly the labor cost of answering the phone.
Bret Taylor 00:06:18
That, for a large consumer brand, means that their contact centers are a meaningful operating expense, for almost any large-scale brand.
Bret Taylor 00:06:28
I think that, you know, when you think about the opportunities to realize the impact of AI right now, being able to pick up the phone for one or even two orders of magnitude less money really changes the game in two meaningful ways. Number one, obviously, you can save a lot of money, which is great. Your CFOs will be really happy about that. But I'm more excited about the second-order effects. If you take something that was between 5 and 20 dollars, you know, there's a wide range of brands represented in the room, but most interactions aren't worth 5 to 20 dollars, which is why it's hard to call most companies on the phone. You know, you've probably talked to your cable company or your insurance company.
Bret Taylor 00:07:10
You know, try calling Google on the phone. I wouldn't even know how to, and I worked there. You know?
Bret Taylor 00:07:15
And it's in part because if your average revenue per customer is below the incremental cost of that interaction, it's almost impossible to fulfill. It just doesn't work. The math doesn't work.
Bret Taylor 00:07:28
I actually think that AI agents will bring down the cost of these personalized interactions so much that it affords the opportunity to actually have, you know, orders of magnitude more personalized conversations with customers. And I bring that up as a sort of roundabout answer to your question, which is: I think there are actually opportunities to deploy these things right now. You know, one of our customers is ADT, the home security company. Probably many of you have them in your house. Next time your alarm's on the fritz, chat with our AI, send me feedback. You know, the T in ADT stands for Telegraph. They started as home security over telegraph, which I think is pretty cool, just to, like, imagine that value proposition. I imagine myself as an entrepreneur pitching that back in the day. And I think it's pretty neat that, you know, now they're in the era of AI agents, they're over a hundred and fifty years old, and they're deploying this technology.
Bret Taylor 00:08:19
I imagine many of your engineering teams are using copilots already for their development.
Bret Taylor 00:08:25
I think there are practical solutions now, and I just think that companies shouldn't, proverbially, boil the ocean. You say, let's start with some use cases that have solutions now, and learn, rather than asking, how do we solve AI governance in the abstract? That's an incredibly hard thing to do. We say, okay, let's automate customer service. Let's empower every engineer with a coding agent or a copilot. Let's have our supply chain contract reviews done by AI.
Bret Taylor 00:08:53
All of a sudden, you narrow the scope of what governance means. You narrow the risk. Your chief compliance officer can say, okay, I understand the blast radius of this application. And all of a sudden, you get from these big abstract problems to much more solvable problems. And the more we go after these practical things, when you're in this discussion about AI governance three years from now, you've got four successful projects, hopefully no failed ones, and all of a sudden you can have those bigger discussions. So, I'm an engineer by trade.


Host 00:09:21
Road map. Yeah. Let's take a look at the polling question and see how many folks in the audience are actually using agents.
Host 00:09:28
Okay. So we've got the vast majority saying that they're experimenting with them, which is fantastic. It sounds like that's the road map to get started.
Host 00:09:37
One other thing I wanna ask you about, Bret, is that the technology around AI agents is changing so quickly. Think about reasoning models, for instance, that are supposed to be very important for advancing AI agents, doing things like chain-of-thought reasoning. At the same time, how can companies like yours keep up with the pace of change, as your customers need to ensure that you're providing them with the right set of governance and guardrails? Because we all know that agents aren't necessarily reliable in all instances.
Host 00:10:06
So I wanna ask you about DeepSeek and its R1 model, the Chinese firm DeepSeek's supposedly revolutionary R1 model.
Host 00:10:15
What kind of impact do you think that technologies like R1 have on technologies like AI agents, and for enterprises?


Bret Taylor 00:10:24
Well, first, the rate of change in the foundation models and frontier models is unlike anything I've ever experienced in my career. It's hard for me as an insider to keep pace; I can't imagine what it's like for all of your teams as well. So, just acknowledging that we're in a period of really rapid change.
Bret Taylor 00:10:46
Just to give, sort of, like, one metric of this rate of change: roughly two years ago, GPT-4 was the best frontier model available.
Bret Taylor 00:10:57
And I might get these numbers slightly wrong, but it's roughly right: it was about 60 dollars per million tokens of output. So that was sort of the cost of using that model.
Bret Taylor 00:11:08
Tokens is sort of a weird metric, but it made it fairly expensive and fairly slow, which meant the applications of that technology were necessarily a bit narrow. If you have a fraud detection system that gets triggered, you know, a hundred million times a day, you probably wouldn't wanna use that model, to inspect fraud. It probably wouldn't be cost effective.
Bret Taylor 00:11:30
Now, GPT-4o mini, which by most evals is strictly better than GPT-4 (I'm sure there might be some where it is not, but on most it is) is 60 cents per million tokens. So that is over a hundred times more cost effective in a couple of years.
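To make the arithmetic concrete, here is a small sketch of the cost comparison Taylor describes, using his fraud-detection example. The prices are the approximate figures he quotes, and the trigger volume and tokens-per-check are hypothetical illustration values, not from the talk:

```python
# Approximate prices Taylor cites, in dollars per million output tokens.
GPT4_COST_PER_M = 60.00       # GPT-4 at launch (~2 years ago)
GPT4O_MINI_COST_PER_M = 0.60  # GPT-4o mini

def cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of generating `tokens` output tokens at a given price."""
    return tokens / 1_000_000 * price_per_million

# Hypothetical: a fraud-detection system triggered 100 million times a day,
# each check producing roughly 200 output tokens.
daily_tokens = 100_000_000 * 200

print(f"GPT-4:       ${cost(daily_tokens, GPT4_COST_PER_M):,.0f} per day")
print(f"GPT-4o mini: ${cost(daily_tokens, GPT4O_MINI_COST_PER_M):,.0f} per day")
print(f"Roughly {GPT4_COST_PER_M / GPT4O_MINI_COST_PER_M:.0f}x more cost effective")
```

At the old price the workload costs about $1.2 million a day; at the new price, about $12,000, which is why a use case that "probably wouldn't be cost effective" two years ago can pencil out today.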
Bret Taylor 00:11:52
Moore's law... you know, when I was growing up, we'd get a new computer and it was always better and cheaper, which is a really fun way to grow up as a computer geek, but it wasn't anything close to this pace. So one thing I always try to remind myself is, if you look at the path from GPT-3 to now, including things like DeepSeek and o3-mini, which OpenAI released recently, you're seeing this almost blistering pace of improved quality and dramatically decreasing costs. And one thing, as you're thinking about deploying AI: I think you should plan for that. If you're planning for both the quality and the cost of models as they exist today, you're planning for the past. And that's kind of a weird thing to wrap your head around. It reminds me, I met a guy who made computer games in the nineties, and they'd joke that they were always testing the game on a computer that wasn't for sale yet, because by the time they were finished with game development, you know, the graphics cards would have caught up. That's kind of the way we are now. And so I think, until we see some structural reason around algorithms, compute, and data why this will slow down, it is rational to plan for this dramatic increase in quality at dramatically less cost, which I think is really exciting.
Bret Taylor 00:13:13
How do we plan for it? We're an applied AI company at Sierra. We're not a research lab. We believe that one of our value propositions is governance and guardrails. And in particular, because we help companies like SiriusXM and Sonos and ADT build their customer experience, you shouldn't have to redefine or reimplement your customer experience every time a new model comes out. You know? So one of the main values we provide is a durable way to implement your customer experience, the guardrails, and the governance, so they can absorb this new technology without requiring a new implementation.
Bret Taylor 00:13:48
And, you know, Mark Twain said history doesn't repeat itself, but it rhymes. If the AI market rhymes with the cloud market, what we do at Sierra is like software as a service, and what the models provide is more like infrastructure as a service, and there are trade-offs. You get more control if you build it yourself, but then you build it, you own it. Right? And software is like a lawn: it has to be maintained. And, you know, most of you have probably shifted more towards software as a service over the past decade as you've analyzed total cost of ownership. I have a strong, and obviously biased, but strong intuition that the same will happen in AI: that having your teams reimplement, re-prompt-engineer, and redo governance is actually a cost that shouldn't be individually shouldered by every company in the world, but should be essentially abstracted away by companies providing solutions to those problems. Anyway, I have a bunch of opinions on it. That's my take, and I think there's a reason why software as a service exists, and there's a reason why applied AI agents will be a meaningful category.


Host 00:14:50
I do have a question for you around the sort of disruption of AI agents to software as a service, especially given your background at Salesforce. But I wanna stick with OpenAI for a moment, where you're the chairman of the board. Just in this last day, we have this massive 97.4 billion dollar takeover bid from the likes of Elon Musk and his group of investors. So tell us, what's going on there, and how is OpenAI responding?


Bret Taylor 00:15:16
Well, I think most of you are aware OpenAI is a nonprofit. And what that means is the OpenAI board has a fiduciary duty to our mission exclusively. And our mission is to ensure that artificial general intelligence benefits all of humanity.
Bret Taylor 00:15:31
So, it's pretty simple for me, which is OpenAI is not for sale.
Bret Taylor 00:15:36
And our job as a board is to exclusively decide what benefits our mission. And, as a consequence, I think this is largely a distraction, and I think the board's gonna continue to exclusively focus on the mission.


Host 00:15:49
So in what way is it a distraction from your mission?

Bret Taylor
I just think, I mean, I guess, OpenAI is not for sale. So, you know, there's a bunch of press, we're talking about it right now, which is one step above a distraction. No, no, I understand. And my whole point is, you know, our job, it's interesting because we are mission driven. Especially as a board, our job is very simple, which is to basically evaluate every strategic decision of the organization through that one test: does this actually further the mission of ensuring AGI benefits humanity? And I have a hard time seeing how this would.
Host 00:16:27
Okay. Got it. Let's go back to agents. We have another polling question around: what is your biggest concern around using AI agents? And one of the things that we talked about is... well, here we have one about: are agents the next big thing for AI? We can cover this too.
Host 00:16:44
A small majority say yes, definitely, which may be good news for Sierra.
Host 00:16:49
A slightly smaller group say maybe. And what's your primary concern around using AI agents inside your organization? Looks like most say reliability, followed by cybersecurity and data privacy. I know you've talked about having these sorts of guardrails in place, and Sierra being at the application layer kind of helps abstract that away from your customers.
Host 00:17:10
But again, to my question around AI agents hitting more of the mainstream: what do you find are the biggest roadblocks to a company like maybe ADT, or any of the others you mentioned, saying, yeah, sure, we'll let agents sort of run amok? And what if something goes wrong? Does Sierra take the blame? Do I take the blame? How do you answer those sorts of questions?


Bret Taylor 00:17:28
I think that's the right worry. If you had polled me and I'd stack ranked them, I probably would have had a very similar list to all of you.
Bret Taylor 00:17:37
Reliability, I'll sort of break that down as the way I see it.
Bret Taylor 00:17:42
Number one is simply robustness.
Bret Taylor 00:17:44
You know, based on the guardrails that you wanna impose for, say, an agentic conversation.
Bret Taylor 00:17:50
Let's just take a retailer. We have lots of retailers who just went through Black Friday, Cyber Monday. Does the agent abide by your return policy? If someone has a warranty claim, does it abide by the rules of your warranty claim process?
Bret Taylor 00:18:04
If, if someone asks for information on a product, does it give correct information or does it hallucinate?
Bret Taylor 00:18:11
And, that's the base level, which is essentially, does it actually abide by the guardrails?
Bret Taylor 00:18:16
This is not easy necessarily, but, again, I do think that in the world of agents, one of my principles is: can you narrow the domain such that you can use existing technology to implement either deterministic guardrails or sufficiently accurate AI-based guardrails, such that it meets your business requirements? And I do think, for any given problem, if you can narrow the domain sufficiently, the answer is yes today.
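As a minimal sketch of what a deterministic guardrail in a narrowed domain might look like, consider the return-policy example above. Everything here is hypothetical illustration (the policy values, function names, and structure are not Sierra's actual implementation): the model may propose an action, but a plain deterministic rule decides whether it is allowed to execute.

```python
from datetime import date, timedelta

# Hypothetical retailer return policy, enforced deterministically,
# independent of whatever the language model generates.
RETURN_WINDOW = timedelta(days=30)
REFUNDABLE_STATES = {"delivered"}

def may_issue_refund(order_status: str, delivered_on: date, today: date) -> bool:
    """Deterministic check applied before any agent-proposed refund runs.

    The agent can suggest a refund in conversation, but this rule,
    not the model, decides whether the action is actually permitted."""
    if order_status not in REFUNDABLE_STATES:
        return False
    return today - delivered_on <= RETURN_WINDOW

# Within the 30-day window: allowed. Fifty days out: blocked.
assert may_issue_refund("delivered", date(2025, 1, 10), date(2025, 1, 25))
assert not may_issue_refund("delivered", date(2025, 1, 10), date(2025, 3, 1))
```

The design point is the one Taylor makes: because the domain is narrow (returns for one retailer), the guardrail can be ordinary, testable code rather than another probabilistic system.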
Bret Taylor 00:18:46
I think there's a more nuanced thing, though. One of our first retailers sold shoes, and we worked with them on their agent. They did a lot of post-purchase interactions, like where's my order, returns and exchanges, warranty claims, things like that. And the first session was: I'm going to a wedding in Hawaii, what sandals will go with my bridesmaid dress?
Bret Taylor 00:19:09
Which the AI was wholly not equipped to answer. You know, it wasn't something that we contemplated in the design of it. And I think one of the interesting things, if you look at all of the websites and mobile apps that you make for your partners, customers, or even employees: your website probably has, like, a menu on it. And probably with enough clicks, you could click through to almost every page on the site, even for, you know, a site with, like, a large product catalog. You're essentially giving a multiple-choice question to your visitors: which of these enumerated options do you want to visit?
Bret Taylor 00:19:49
A conversational agent is a free-form text box. It's sort of like going from the Yahoo directory to Google, for those of you who are my age and remember that. You go from, like, enumerating all the possibilities to something that's more free-form. And as a consequence, you know, maybe pun intended, you're giving a lot more agency to your customers. They can say whatever they want. And so you end up with this much more distinct voice of your customer in those interactions, where they will take your agent in directions you didn't anticipate.
Bret Taylor 00:20:20
As a consequence, it feels much more interactive as you're building it out. You're seeing, essentially, customer demand play out in more interactive ways. But the other more nuanced thing that I've thought a lot about, and I think this will resonate with all the technologists in the room, if you think about the last big project you worked on, what percentage of the specification was in the requirements document, and what percentage is simply the way it was implemented in code?
Bret Taylor 00:20:46
And a huge percentage is actually in code today. Like, the requirements cover the important stuff. But a lot of, like, what happens when you click this button and the Internet's out? Or some, you know, random corner case. There's some if statement in there that some engineer wrote that did something.
Bret Taylor 00:21:03
And the whole of your product experience is both what you wrote down and just the way it was implemented. If you think of a world where free-form text boxes become a key part of your experience, the odds that you've specified every possible behavior are almost zero. You can't enumerate all of human interaction and specify what you want to happen. And so I think one of the more nuanced things we found in our customer base is: what do you do in the scenarios where it goes off script? And some of the answers are easy. If it's asking about something unrelated to your brand, say, maybe politely redirect people back on topic.
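The "politely redirect" fallback for off-brand requests could be sketched roughly like this. The topic classifier, topic set, and wording are all hypothetical, just to show the shape of the pattern: handle what's in scope, and give everything else a graceful exit rather than undefined behavior.

```python
from typing import Callable

# Hypothetical set of topics the agent is designed to handle.
ON_BRAND_TOPICS = {"orders", "returns", "warranty", "products"}

def route_message(topic: str, handle_on_topic: Callable[[str], str]) -> str:
    """Fallback wrapper: handle on-brand topics, politely redirect the rest.

    `topic` would come from an upstream classifier (not shown here);
    `handle_on_topic` is the agent's normal handler for in-scope requests."""
    if topic in ON_BRAND_TOPICS:
        return handle_on_topic(topic)
    # Off-script: a polite, on-brand redirect instead of a free-wheeling answer.
    return ("I'm best at helping with orders, returns, and product "
            "questions. Is there something along those lines I can help with?")

# An off-brand question gets the redirect, not an improvised answer.
reply = route_message("politics", lambda t: f"Handling {t}...")
```

The interesting design question Taylor raises next is the gray zone this sketch ignores: requests that fall outside the standard procedure but are still clearly about your brand.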
Bret Taylor 00:21:46
But I think there are actually a lot of things in that sort of zone of: it wasn't quite in the standard operating procedure, but it's clearly related to your brand. How much agency do you wanna afford the AI? And I think this is an interesting conversation to have with stakeholders in your business. It's just an interesting mindset shift, which is: you can eliminate all agency from the agent, and it becomes a robot, and that kind of eliminates the entire purpose of deploying the technology.
Bret Taylor 00:22:14
Or you can say, okay. Essentially, where there is undefined behavior, we're gonna let the AI reason and and think. And you have to accept the premise that some of the time it may do something that, you disagree with, that your stakeholder business disagrees with.
Host 00:22:28
Sounds like a pretty big risk though. Right?
Bret Taylor 00:22:30
It is. For enterprises, I think it is a big risk. And I also think it's an opportunity. And, you know, one of the things that I've always tried to remind myself is: humans are flawed too.
Bret Taylor 00:22:41
And, you know, not every employee at your company is making a perfect decision at every point, and we've created basically the right governance inside of our companies to mitigate those risks in a variety of ways. And I think one of the mindset shifts I found to be productive is: don't wait for AI to be perfect to deploy it. Accept that it isn't perfect. In fact, plan for its imperfections. So rather than asking, will this AI do something wrong, ask: when it does something wrong, what are the operational mitigations we've put in place to deal with it and manage the compliance risk? If you just think of most of your compliance controls, for those of you in public companies, they assume things will go wrong. No regulator is asking for companies to be perfect. What they're asking is that you find out when something went wrong and how you actually remediate the issue. I actually think it's a very constructive way of thinking about AI because often you'll have existing controls at your company you can reuse, just as you do for your people. And I think the companies that wait until AI is perfect will see their competitors
Bret Taylor 00:23:53
vastly outpace them. And I like that: don't wait until your AI is perfect.


Host 00:23:58
That's all the time we have for our questions, but I wanna make sure that we have time for one or two questions from the audience. And I see an individual in the back here wearing a dark sweater.


Questioner 1 00:24:09
Hi, Bret. My name is Mudu Sudhakar. I'm the founder of Aisera. Question for you: are you heading towards competing with Marc Benioff, to cut his licenses? Or are you going to play nice with him? This is a Wall Street Journal event; I want to hear your most provocative positioning with Marc.


Host 00:24:25
I thought I would be asking the most provocative questions.


Bret Taylor 00:24:27
Go ahead.
Bret Taylor 00:24:28
I consider Marc one of my closest mentors in business. I don't know if you've ever had a boss that, you know, taught you more than you can even give them credit for. Since the Super Bowl just happened, I consider myself part of Marc Benioff's coaching tree. And, you know, he can take credit for all my success. So, as you all probably know, Salesforce uses this term Ohana, and I think I'll always be a part of the Salesforce Ohana.


Host 00:24:54
We have time for maybe one or two more. This individual with the glasses here.


Questioner 2 00:24:59
So I have a very surgical, pointed question, but I wanna provide you some context, so just bear with me for a second. You touched on DeepSeek.
Questioner 2 00:25:08
And, you know, DeepSeek achieved the performance that it achieved by doing compression, quantization, distillation.
Questioner 2 00:25:16
And what I also heard is they went into the assembly code and optimized the model by almost, like, breaking CUDA, which people thought was an insurmountable fortress. So, with that context: in order for AI agents to spread like a virus, for every enterprise to deploy them, and for them to become pervasive and ubiquitous...
Questioner 2 00:25:40
If you were to sort and prioritize the top three technical challenges, what would those be? And I'll give you a hint: the wrong answer is probably, like, there are no challenges.


Bret Taylor 00:25:54
And you're talking about foundation models and furthering the development of these frontier models? Yeah.
Bret Taylor 00:26:02
I think there are three challenges that, thankfully, are somewhat independent, which I think increases the likelihood that we will not get stuck in terms of progress. The first is data. A lot of the models that produced ChatGPT and sort of created this wave of interest in large language models were trained on language, trained on text.
Bret Taylor 00:26:29
Most of the available text has been used already. So probably the two interesting areas there are synthetic data generation and simulation; those are the two areas of research that I think are quite interesting. Given that time just ran out, I'll just leave it at that, but I think there are interesting research labs and companies working on both. The second is compute. I think there are two challenges. One is scaling out compute, and I think there's a ton of investment in infrastructure. And I do think that's one of those areas where public-private partnership can help, just because, no matter how efficient these models are, the quality of models still improves logarithmically with the investment in compute, so increasing the scale of our compute is important. The second thing, which you alluded to, is making inference more efficient with techniques like distillation or a lot of other optimizations, which means you can either think longer if you want higher-quality output or run more efficiently, depending on your goals.
Bret Taylor 00:27:35
And the third of it is algorithmic.
Bret Taylor 00:27:37
And, you know, I think probably the most recent interesting breakthrough there was these reasoning models where companies are doing reinforcement learning on chains of thought.
Bret Taylor 00:27:48
Before that, you know, there were a lot of other breakthroughs, but certainly the transformer model, and improving the parallelism of both training and inference, was a huge breakthrough.
Bret Taylor 00:28:01
And I can't tell you what the next one is, obviously, since, like, the whole idea is invention. But it is really interesting to me that in the past few years you had, you know, quite a few algorithmic breakthroughs, like the chain of thought that led to things like o3, DeepSeek, and others.
Bret Taylor 00:28:17
When I mention they're independent, what's interesting about that is you can end up hitting plateaus in one of them. Like, maybe we don't have another algorithmic breakthrough for another 12 months, but you can improve compute, and you can improve data, and still see a return on that investment.
Bret Taylor 00:28:31
I think that's what's most promising as you're thinking about, you know, our path to something that represents generalizable intelligence: the optimizations and the challenges, which do exist, are not interlinked. And I think when you have scientific progress, processes where you can truly get stuck, it's often because you have one problem you need to solve. And I think having three independent challenges to solve bodes well for the future of AI.
