This week’s podcast is about quality datasets and the context layer. Both are needed to scale agentic AI operating systems.
You can listen to this podcast here, which has the slides and graphics mentioned. It's also available on iTunes and Google Podcasts.
Here is the link to TechMoat Consulting.
Here is the link to our Tech Tours.
Here are my past articles on data operations:
- Data Network Effects and Data Scale Aren’t Moats (1 of 2) (Tech Strategy)
- My Playbook for Data-Empowered Operations (2 of 2) (Tech Strategy)
- The GenAI / Agentic Operating Basics (Tech Strategy)
Here are the mentioned McKinsey & Co. and a16z articles.
——–
Related articles:
- A Simple ABC Framework for Agentic Ecommerce (1 of 2) (Tech Strategy)
- What Merchants and Brands Should Do About Agentic Ecommerce (2 of 2) (Tech Strategy)
- The Winners and Losers in ChatGPT (Tech Strategy – Daily Article)
From the Concept Library, concepts for this article are:
- Digital AI Operating Basics 3: Digital Core
- Data-Empowered Operations Playbook
- GenAI / Agentic Operating Basics
- Agentic AI
From the Company Library, companies for this article are:
- n/a
——transcript below
00:05
Welcome, welcome everybody. My name is Jeff Towson and this is the Tech Strategy Podcast from TechMoat Consulting. And the topic for today: two problems with scaling agentic AI. Really it's about scaling AI and agentic AI, which is kind of a turbocharged version of AI. I'll get into that. Basically, all the problems you have with AI get much worse when you go to agentic, because the volume increases dramatically.
00:34
Anyways, I’m going to go into sort of two problems with that, which I’ve been thinking a lot about. I’ve been reading and thinking about this a lot over the last couple of weeks, and I’m trying to put it all together so that this is sort of my working explanation. And basically, I mean, just to jump to the so what. The problems are, how do you create quality data sets and ensure them over time? Because that really changes everything. It changes the whole.
00:59
the whole performance cost structure, everything. There's a massive variable here: everyone talks about model A versus model B versus model C, but all those models behave very differently depending on the quality of your data set. It's a huge variable in all of this. And then the second one is sort of this idea of the context layer. The simple idea is: you can have a great data set and you can have a good model, and you can ask a simple question like, what was the revenue of this business last year, and it won't get it right
01:27
because it needs business context. And building out a business context layer that evolves over time is actually really difficult. So I’ll talk a little bit about the context layer. Both of those things are difficult to scale. So that’ll be the topic for today. All right, standard disclaimer.
01:46
Nothing in this podcast, my writing or website is investment advice. Numbers and information from me and any guests may be incorrect. Opinions expressed may no longer be relevant or accurate. Overall, investing is risky. This is not investment, legal, or tax advice. Do your own research. That was one breath, no problem that time. And with that, let's get into the content. Now I've got a…
02:07
a short list of problems I'm really trying to get into. I think they're important from a strategy execution point of view, not just kind of interesting. There's a lot of those. This is kind of like executing generative AI and AI business models, and where the real questions are related to digital strategy, things like that. And I kind of got here... I'll read my list for you. I keep it on my phone. Number one, which I've taken a stab at several times, is just the cost of AI.
02:35
And when I say AI, I'm talking about generative AI and agentic AI. The cost structure is really not clear. I've done some things on the cost of correctness, what it costs to maintain models that are accurate over time, and how that can be pretty significant. And then cost structures tend to play out in moats. Some of the moats are on the demand side, but a lot of them are on the cost side. So it plays out there. So that's bucket number one I've been struggling with.
03:02
Number two, which I’m doing better at is the intersection of AI and e-commerce, which I think is huge. It’s transformative. It’s scary. It freaks me out. And I’ve basically been studying Alibaba because it’s the only company, well, you could say Amazon, but really Alibaba that’s top tier in both of those and they’re leaning into this question all the way. So, I’ve written a bit about them. I’m going to spend some more time. I’m going to be out at the headquarters for a couple of weeks.
03:31
So, AI and e-commerce, Alibaba is the one I'm watching, but that intersection is hugely important. Third one is this idea of AI and a super app, which I've talked about before. OpenAI is kind of going for this. I really think it's a Tencent question. I think Tencent, which is basically in the content, messaging, and gaming business, payments too. But I mean, they are leaning heavily into combining AI with all their products. They're very aggressive right now.
04:01
So I’ve been looking at this idea of a super app, which is AI based, spending a lot of time with them too. I’m actually going to do a lot with Tencent over the next couple of months. We’ve already kind of talked about doing some stuff. I’ll be looking at them, visiting them a lot. Anyways, those are one, two, and three. Number four, which is what I’m talking about today, which is, you know, everyone’s good at piloting AI and agents. Everyone’s doing that. Lots of experiments.
04:29
Deploying it as a productivity tool is pretty good. Everyone's doing that. It works pretty well. You give your employees a bunch of tools, they're better. Okay, but taking it to the next level, scaling it beyond just productivity tools, it's a real challenge to build sort of AI-first operating systems and business models. And one of the problems is you look at certain workflows, you do pilots, and you try and scale them up, and stuff starts to break.
04:56
So that’s what today is about. What are the limits and challenges to really scaling AI up to the enterprise level and especially agents, which is a big deal. And then the last one, which I’ve got an article coming on this, is this idea of robots and embodied AI, physical AI, which there’s two issues there I’m trying to get my head around. Number one is just the economics of manufacturing, which I’m not deep in and I’ve got to get a lot smarter at that. So, I’m going to…
05:25
write a bit about just the economics of manufacturing and why Chinese companies in particular are so unbeatable in manufactured products, pretty much everywhere. But then there's also this idea of physical AI. Once you leave the laptop and the internet and you go out into the real world and you're trying to apply foundation models, it is so much harder. The world is so much more complicated than the systems you build on computers. So, these foundation models, your
05:56
LLMs, visual models, are not nearly up to the task of taking apart the physical world. So, we get these new VLA models coming in, the idea that you can put them into robots. That's a whole big subject. And I'm going to visit a couple of robot companies in the next couple of months. But fortunately, China is kind of the epicenter of that. So, I'm going to hopefully be all over that. And those are kind of my five buckets, just to sort of tell you where I'm going. All right. Getting back to number four, which is: how do you scale AI?
06:25
I've written a couple articles about this. One was about data network effects and this idea of data scale. Usually, I don't really buy data network effects. I think it's mostly just a flywheel that works for a certain amount of time, but it's not really a moat. Definitely not a network effect. Usually, what people are talking about is just scale advantages you can get in data. And we know what scale advantages are in things like factories.
06:52
It's very easy to see and understand that. Data is a very fungible, sort of strange asset. It's easy to replicate. It's hard to keep proprietary. So this idea of data scale is what I've written a couple articles on. I'll put the links in the show notes. I wrote one called My Playbook for Data-Empowered Operations. So, when you start looking at generative AI and scaling it up to the operating enterprise level,
07:19
usually the first step is you're talking about data. Until you get sort of data and architecture set, you can't really do too much. So I'll put the link in there, which is interesting. And then I did another one called The GenAI / Agentic Operating Basics, which is basically taking apart what standard operating activities look like in a generative AI world as opposed to a digital world, which I've…
07:46
been writing about that forever, the digital operating basics. So I have sort of a generative AI, agentic version of that. And anyways, that’s three articles I’ve done on this. I’ll put the links in the show notes for that, but I’m kind of taking another whack at this today. So what are the problems with scaling AI and agents? Why is it so difficult? There are good polls that BCG and McKinsey and these companies do of CEOs and management teams. And they ask them,
08:15
This is literally one of the questions they ask: what is the biggest challenge to this? And the answer, I think it was from BCG, was that the number one limitation and challenge to scaling these things inside enterprises is operating talent. It's very hard to get. It's all new. Not that many people understand this stuff. There's a lot of training going on. That's a bottleneck. Number two is data.
08:41
And when you look at the next ones on the list (I won't show you the list, it's not that terribly interesting), they're things like data architecture at scale, quality data at scale, quality data sets at scale. You realize it's data, data, data, getting that to work. So that's kind of the biggest problem. And that's, you know, mostly talking about AI there. Once you go from AI to agents, all of that gets much worse. Why?
09:09
Because with generative AI, you're talking about humans using it. Well, humans engage with these tools episodically. You might do a couple of searches. You might take a nap. We sort of jump on these tools and jump off. Once you move to agentic AI, these agents never stop working. That's a major difference: it is a continuous level of activity.
09:37
And that means you need a continuous steady flow of quality data that they can draw on. It’s not just episodic engagement anymore.
09:49
And the volume is also going to go through the roof. There's 8 billion humans, most of whom don't do this stuff. Now imagine 800 billion agents soon, continuously active on the mobile networks, always doing this stuff. So you can see the whole scaling problem becomes much worse when you go from AI to agents. Then it gets actually a little worse, because the agents don't just need one data set. They need lots of data sets. They need lots of models. So they need to access all sorts of things.
10:19
And then they start to work together as teams and they coordinate, and one agent may create one part of the task. Let's say create a spreadsheet with something. The next agent may take that task and turn it into models and send it to a customer. A third agent might take the customer response. You're talking about multi-agent teams. Well, again, scaling becomes difficult. So
10:44
One way I’ve heard this described, which I don’t know if I believe this, they said basically like there’s probably two ways of working that are emerging. You have sort of single agent workflows and then you have multi-agent workflows. In a single agent workflow, you get one agent using lots of different tools and using lots of different data sources, mostly sequentially. Multi-agent workflows, you get all types of specialized agents working together. Okay, well they need to work together, so you need the interfaces.
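Here's a rough Python sketch of those two workflow shapes. All the agent and tool names are invented placeholders, not a real framework:

```python
# Toy contrast between the two workflow shapes: one agent using tools
# sequentially vs. specialized agents handing off through a shared store.
# Every name here is a hypothetical placeholder.

def single_agent_workflow(task, tools):
    """One agent carries its own state through a sequence of tools."""
    state = {"task": task}
    for tool in tools:
        state = tool(state)  # each tool reads and updates the same state
    return state

def multi_agent_workflow(task, agents, shared_store):
    """Specialized agents coordinate via one shared, interoperable store."""
    shared_store["task"] = task
    for agent in agents:
        # every agent reads/writes the same store, so its schema and
        # business definitions must stay consistent across all of them
        shared_store.update(agent(shared_store))
    return shared_store

# Minimal stand-ins so the sketch runs
make_sheet = lambda s: {**s, "sheet": f"sheet({s['task']})"}
make_model = lambda s: {**s, "model": f"model({s['sheet']})"}

solo = single_agent_workflow("forecast", [make_sheet, make_model])
team = multi_agent_workflow("forecast", [make_sheet, make_model], {})
```

The point of the sketch: in the multi-agent version, everything runs through the shared store, which is exactly why the data has to be interoperable, not just clean.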
11:14
But they also kind of need shared knowledge graphs, and they need sort of shared data resources that everyone agrees on. You can't mix that up. So not only do you need consistent quality data, it needs to be interoperable. That's a problem. So anyway, think about that: AI to agents creates a big problem. Now, here's something I've been thinking about a lot.
11:41
You know, I talk about scale all the time. Scale is the first advantage you can get in this world as a business. Once you go from a small company to big company, you get a lot of advantages. Some of them are competitive advantages. Some are just, hey, you’re just bigger, you have more resources. A lot of advantages to scale. How do we think about scale? We look at revenue, that’s part of it. We look at headcount. Ooh, this is a 5,000 person firm. This is a 200 person firm.
12:11
We can look at... you know, the headcount one is interesting to think about, because what does that tell you? If one company has 5,000 people and one has 500, what does that tell you? It tells you a sort of ballpark estimate of their capabilities, because we can say, well, how much can this company do? Well, that's easy. We take the number of humans, multiply it by the number of hours a human can work in a day.
12:36
We put a productivity factor on top of it and that gives us our overall productive capacity for this business. And a large business has more humans than a small business, so it's better. That's scale. Now, actually you want to break that down in terms of workflows, but that's the general idea. Okay, none of that makes sense in an agent world.
13:00
If we’re talking about what scale means, really what we’ve been talking about is a scale advantage. The idea of being a bigger company is better than a smaller company. That’s kind of a human-based perspective. What would a bigger company that’s mostly agents look like versus a smaller company? It wouldn’t be headcount. We would look at the number of agents. Well, that’s interesting. So you could have a small company in terms of people, 50 people, 100 people,
13:30
with greater productive capacity than a 5,000-employee company. So the whole idea of where a scale advantage comes from, where the benefits of scale are, has kind of a human-centric approach baked into it. The more you think about it, you're like, ooh, this is really kind of a big idea. So how would you assess a large agent-first company? Well, I'd look at the number of agents.
13:56
I would definitely look at the amount of tokens they have to use. Tokens are a good measure of productivity. A company with a billion tokens to spend, whether it's spent by humans or by agents, can achieve more than a company with 100 million tokens. Okay, that's probably true. So we can look at the number of agents, we can look at the tokens. I think tokens are actually more important than the number of agents.
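A back-of-envelope version of that comparison. Every number here is invented for illustration, and "productivity" is just a stand-in multiplier:

```python
# Back-of-envelope capacity comparison: headcount-based vs. token-based.
# All numbers are invented for illustration.

def human_capacity(headcount, hours_per_day, productivity=1.0):
    # headcount x working hours x a productivity factor
    return headcount * hours_per_day * productivity

def agent_capacity(token_budget, productivity=1.0):
    # tokens spent x a productivity factor (what that factor actually
    # is, is the interesting question)
    return token_budget * productivity

big_firm   = human_capacity(5000, 8)   # 40,000 productive hours/day
small_firm = human_capacity(500, 8)    #  4,000 productive hours/day

# A small headcount with a big token budget measures on a different axis
lean_firm  = agent_capacity(1_000_000_000)
```

Same shape of formula in both cases; what changes is the unit you count.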
14:25
But then we also want to look at the productivity factor. Okay, you’ve got more tokens and you’ve got more agents, but that’s like with humans. Are your humans more productive? Well, how do you measure productivity? I think you look at data quality. I think data quality is the productivity factor that you multiply times the number of agents and the number of tokens to get an overall assessment of the productive capacity. It’s to some degree, it’s also about the model quality, but
14:55
If you have a big model with low-quality data, is that better than having high-quality data sets that you're running through smaller models? I suspect the latter is more effective. So once you get higher quality data sets, you don't have to use the massive LLM. You can use a series of small models if you've got the right data. So that's kind of how I'm thinking about
15:22
production, and what does it mean to be larger than another company? I tend to think that's where the world's going, but we'll see. Okay, so let me get sort of to the so what here. There was a very good McKinsey article recently. I'll put the link in the show notes. It basically argues: how can you scale agentic AI? And it lists a bunch of problems: training, workflow, things like that. I don't think that's…
15:50
too interesting or surprising. But they had a couple good points. Here's kind of what they said. The main point was: scaling is all about having strong, high-quality data. That's kind of the linchpin of the whole thing. If you get that right, things work well. If you don't have that right, the quality suffers. You can run foundation models on garbage.
16:17
But you're going to get bad results. You're going to have to have humans engaged to check things and clear things and give approvals, because you can't trust it. You're going to need more powerful models. It will take more tokens, because the output quality and yield will be lower. So the more you can lean into strong, high-quality data sets, the more everything works and scaling gets easier. Okay. What does that mean, data set? Well, that's the word they use: data set. Well,
16:46
I would use the word data architecture, because the nature of generative AI is that this is not like you've got a database that you're just drawing from with a standard query, or SQL or whatever, and you're reading and writing to a mostly static database. No. The amount of data that a foundation model requires
17:12
can never all be saved. It's way too much. So it's more like you're building a river system that has to flood data through the system all the time. And then your foundation models will draw on that. But you can't save it. I mean, this is why every time you deal with a foundation model, an LLM, it always forgets what you're talking about around three to four questions in. There's just way too much data required to do this stuff. So the memory becomes unworkable.
17:40
So really, I mean, I view it sort of as a river system that you're building, but within this river system, you need various processing plants. You need to ingest the data, you need to tag the data, you need to put it in the right place, you need to eventually put it into embeddings, you need to put it into knowledge maps, you've got vector databases, you've got all that, and that's just running into the model, right? Then the model can draw on it to answer whatever question I may have typed into ChatGPT.
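Here's a rough Python sketch of those processing-plant stages, ingest, tag, embed, index. The "embedding" is just a toy word count standing in for a real embedding model, and the pipeline shape is illustrative, not a real product:

```python
# Minimal sketch of the "river system" stages: ingest -> tag -> embed -> index.
# The embedding here is a toy bag-of-words vector, standing in for a real
# embedding model; the tagging rule is equally simplistic.

from collections import Counter

def ingest(raw_docs):
    # normalize and drop empty records
    return [d.strip().lower() for d in raw_docs if d.strip()]

def tag(doc):
    # trivially tag by first word; real pipelines classify and route here
    return {"topic": doc.split()[0], "text": doc}

def embed(doc):
    # toy embedding: word counts (a real system calls an embedding model)
    return Counter(doc["text"].split())

def build_index(raw_docs):
    index = []
    for doc in map(tag, ingest(raw_docs)):
        index.append({"meta": doc, "vector": embed(doc)})
    return index

index = build_index(["Revenue grew 10% ", "", "Revenue guidance cut"])
```

Each stage is swappable, which is the point of thinking of it as a pipeline rather than a static database.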
18:10
But that becomes more complicated when it's agents, because they are not only drawing on the data architecture, they are actually doing things and then writing back into the data. So it's sort of an ongoing relationship between the agents and the model, the same way we as humans would save things in there. Well, now the agent is doing it. So how do you have it write back in and check and make corrections and things like that? Yeah, the whole problem becomes a big, you know,
18:38
sort of ball of string very quickly. But here's kind of what McKinsey said, which I thought was solid. They said there are a couple of steps that matter to start to build sort of strong, high-quality data sets, things like that. Number one: you've got to boil this question down to something usable. So you need to identify certain high-impact workflows. You know, the whole company's too big. You've got to
19:07
boil this down to a couple of end-to-end workflows where you're going to try and build this thing so it operates on its own. Yeah, that's step number one. Fortunately, that part's actually pretty easy. If you try and sort of redesign everything at once? No, that's way too complicated. So people like to look at marketing. That's a big one.
19:34
They also like to look at, let's say, knowledge management: let's put all of our information in one gigantic thing that's kept updated. That's usually where people start. But, you know, as I've said before, you can sort of map out the key workflows end to end. And then, against each step of the workflow, you can sort of tag it with one of four levels. It can be traditional deterministic software.
20:02
It can be more probability-based traditional machine learning. It can be generative AI. Or it can be agents. And you can sort of task each type of compute (because really those are four different things) against the various steps of the workflow. So that's kind of what you're looking to do. So, okay, fine. That's step one, solid. Step two, this is the big one, which is really most of what I'm talking about: you modernize the data architectures. Okay: modernize, upgrade, rebuild.
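That four-level tagging exercise can be sketched in a few lines. The workflow and its steps here are invented examples, not from any real deployment:

```python
# Sketch of tagging each workflow step with one of the four compute
# levels described: deterministic software, classic ML, generative AI,
# or agents. The example workflow steps are hypothetical.

from enum import Enum

class Compute(Enum):
    DETERMINISTIC = 1   # traditional rule-based software
    CLASSIC_ML    = 2   # probability-based machine learning
    GENERATIVE_AI = 3   # LLM calls, typically with a human in the loop
    AGENT         = 4   # autonomous agentic execution

# A hypothetical end-to-end marketing workflow, tagged step by step
workflow = {
    "validate_order_data":  Compute.DETERMINISTIC,
    "score_churn_risk":     Compute.CLASSIC_ML,
    "draft_outreach_email": Compute.GENERATIVE_AI,
    "run_campaign":         Compute.AGENT,
}

agentic_steps = [s for s, c in workflow.items() if c is Compute.AGENT]
```

The useful output of the exercise is the map itself: it tells you which steps actually need agents, and which should stay boring deterministic code.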
20:32
But what you don't want to do is start over. I used to think this was going to be the way you would do it: you would rebuild the entire tech stack because it's so different. But in retrospect, that's a mistake. You want to start with what you have. So you look at your current data architecture and you take pieces of it and you start to rebuild there, as much as you can, with AI. And
21:00
I would almost say it's like a Frankenstein approach. Data architecture by data architecture, you rebuild bit by bit to make it work for agents, rather than redoing the whole thing. I'm not sure about that. I haven't seen this done enough at scale to really be confident in the best approach, because you don't know. Maybe that sounds like the good approach, but then two years down the road, you're like, oh, that was a mistake, we should have started from scratch. Unclear, but that's kind of my…
21:30
Current thinking. Okay, so what is the architecture? You've got the data source layer. This is everything coming in. Let's say customer views, customer wish lists, customer purchase history, for doing omnichannel interactions. You've got the basic incoming data ingestion, the data source layer. I'm talking e-commerce in this situation. Fine.
21:57
Well, keep in mind some of that is quantitative, but a lot of it is sort of unstructured data: images, videos, calls, audio. Okay. You've got to ingest it all. You've got to transform it. You've got to sort of recombine it and put it into something that gets it closer to data ready. I'm sorry, not data ready: AI ready. You'll hear this term all the time. AI-ready data, AI-ready data. Quality checks, security controls.
22:26
Things like that. Fine. Then you get to labeling, cleaning it. Then you get to this interesting question, which I'll talk about in point two, which is this idea of adding business context. The idea that you can't just give LLMs numbers and revenue. You have to give them business understanding. That used to be called the semantic layer. Now they're calling it the context layer. I'll talk about that later, but we'll call that level one. Level two:
22:55
we get to the data platform layer. That's where this all connects to everything, and it all becomes usable for applications, for AI models, pretty much everything. And this system, this is not sending a query to your database, getting results, and then making a change in Excel and saving. This is real-time, ongoing interaction, continuously. Like, this is the nervous system that,
23:24
you know, lets all your agents work together. Now, if we're talking about, let's say, omnichannel e-commerce, okay. You know, we need all the agents to be able to look in and understand the customer preferences, the interaction history, the transaction status. That's all got to sort of be accessible all the time to all the agents, so the interactions matter.
23:48
But then you get to sort of the embedding services and you get to the vector databases, where you start to take all this unstructured data, even though it's been cleaned, and you start to give it meaning, which is what vector databases do. You know, they put knowledge in data by putting things together in close proximity in the vector database. So that starts to get you meaning. Rather than just searching by keyword, you can search by meaning. Very important. And then…
24:17
You can take that up to the next level, which is: okay, we can see how certain agents could access this, take the data, put it into various LLMs, look at the business context, and take an action, make a decision. The next level up would be: okay, can multiple agents work on this data platform layer together? Is this where they're going to be coordinating, or are they going to be coordinating one level above that, which is where the apps might be?
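That "search by meaning" idea from the vector database discussion is easy to sketch. Here's a toy version in Python; the three-number vectors are hand-made stand-ins for real embeddings, and cosine similarity is the standard closeness measure:

```python
# Toy illustration of "search by meaning": documents close together in
# vector space are retrieved even with zero keyword overlap. The 3-d
# vectors are hand-made stand-ins for real embedding-model output.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

docs = {
    "Q3 sales were up":      (0.9, 0.1, 0.0),   # "revenue" neighborhood
    "turnover rose sharply": (0.8, 0.2, 0.1),   # same meaning, no shared words
    "the server crashed":    (0.0, 0.1, 0.9),   # unrelated
}

def semantic_search(query_vec, k=1):
    # rank documents by similarity to the query vector
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query in the "revenue" neighborhood finds both revenue docs,
# even though they share no keywords with each other
hits = semantic_search((0.85, 0.15, 0.05), k=2)
```

A keyword search for "sales" would miss "turnover rose sharply" entirely; proximity in the vector space is what recovers it.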
24:47
Now, within there, I kind of skipped over this idea of the semantic layer. There's an Andreessen Horowitz article that came out recently about the context layer. It used to be called the semantic layer: where you sort of combine raw data with business knowledge and give it some understanding. I'll talk about that later, but that's also kind of right in that same layer. Also within this layer, you get into the idea of data products.
25:16
With all this data layer, you're basically, ultimately, building a product that various people, AIs, and agents can access to do whatever they need to do. That's kind of where the rubber hits the road in all this: data products. Okay. And then on top of that, we get sort of data consumption, and we get apps and applications and workflows, that sort of thing. So you can kind of see where the problems are in each layer. Okay. Ingesting the data source layer: difficult,
25:46
complicated, but not overwhelming. And you can see how your solution should be able to scale. Okay, the data platform layer: again, okay, vector databases, we know how to build those, things like that. But then we start to get to the interaction with the agents, we get agent-to-agent interactions, we start to get data products. Okay, fine, we can see the layers. How do you get to that top layer such that your architecture
26:13
is producing consistent, high-quality data that your agents can access? That to me is the key KPI here. How can one company do these activities versus another company such that the resulting quality of its data set or data architecture is much better than a competitor's? Because then all your economics, or a lot of your economics, should look different. So that to me is the KPI I'm trying to put on those multiple layers I just sort of went through
26:43
is that quality assessment. So think of how that would compare between two companies. One is higher quality, one is lower quality. And now think about the scaling question. If both of those companies go up in size by a factor of four, who is going to have more trouble? The lower-quality-data company is going to have dramatically more trouble when you try to scale this thing up. So it's almost like you've got to get that right as the kernel,
27:10
and then life gets easier. But if you don't get that sort of quality data set at that level, then you can see how life is just going to get more and more difficult. Anyways, that's kind of how I'm thinking about this. And I'm trying to think of how this plays out between various companies. Now, there is a third step that McKinsey lays out here, which is the idea of enforcing data quality. Yeah, you need feedback. You need continuous cleanup. One of the interesting things to think about here is:
27:37
Okay, everyone knows how you add data to the database. What about removing data from the database? Does it just stay there forever? Not everything’s going to get rewritten. Is this like a tree that’s going to continually grow and grow and never get pruned? So there’s also this idea of eliminating data and paring it down over time. One, you need feedback to increase quality, but two, this thing is going to sprawl. So yeah.
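A toy version of that prune-and-feedback idea. The field names and thresholds are invented for illustration:

```python
# Sketch of the quality-enforcement loop: score records, then prune what
# is stale or low quality so the store doesn't sprawl forever.
# Field names and thresholds here are invented.

def enforce_quality(records, min_score=0.7, max_age_days=365):
    kept, pruned = [], []
    for rec in records:
        stale = rec["age_days"] > max_age_days
        low_quality = rec["quality"] < min_score
        (pruned if stale or low_quality else kept).append(rec)
    return kept, pruned

store = [
    {"id": 1, "quality": 0.95, "age_days": 30},    # keep
    {"id": 2, "quality": 0.40, "age_days": 10},    # prune: low quality
    {"id": 3, "quality": 0.90, "age_days": 900},   # prune: stale
]
kept, pruned = enforce_quality(store)
```

The real version would feed the pruned records back into labeling and correction, which is the feedback loop part; this just shows the gardening step.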
28:04
Enforcing data quality, sort of a feedback loop, matters. I think that feedback loop to enforce quality could be a big component of how one company gets a high-quality data set versus another. And that's pretty much where this article ends. I'm summarizing it, obviously. They do talk a little bit about how operating models are going to evolve. It's going to be
28:31
human-agent hybrid environments. It's going to be agent-first environments. I don't think the thinking is awesome. I like my articles better, but then I usually like my articles better. Okay, so that's kind of point number one within all this: how do you scale AI, and then agentic, which is way more complicated? Okay, data quality, data sets. I think that's the primary factor here. The other point I wanted to talk about, and I'll finish up here in a sec, is this idea that context is king:
29:01
that everything I just said only takes you so far. You can have high-quality data, very well done, very well cleaned, all laid out, and the foundation models that the agents rely on, and that we rely on, won't give you the right answer, because they don't understand the context. Data is not enough. It's just not. You need data plus the context, plus a lot of tokens.
29:30
Now there’s a good article, I’ll give you the link to the article by Andreessen on this. It’s just a quick read. There’s not a lot of depth to it, but I think it raises the right question. And here’s a quote from it. Over the past year, the market has realized that data and analytics agents are essentially useless without the right context. They aren’t able to tease apart vague questions, decipher business definitions, and reason across disparate data effectively.
30:01
Yeah, I think that's the missing piece. And they say, you know, this is not their fault. These data tech stacks evolved from a very structured, quantitative world, you know, basically numbers in a spreadsheet. But that's not really how we capture business knowledge and understanding. It's visual, it's auditory, it's some numbers. I mean, think about how human beings work. We don't understand things that way. We put it all together. The context is kind of king. So, you know,
30:30
in 2024, call that 2024 to 2025, as the LLM capabilities dramatically increased, everyone got excited about agents. Let's start building these things on top of our existing tech stacks. And yeah, that's going to be a powerful idea if we can figure out how to scale it up. They call that the frenzy. And then, yeah, hitting the wall. And this is a quote: despite the initial optimism, it quickly became clear that most of these efforts failed.
30:59
Organizations tried to deploy their agents but ran into the wall. MIT famously published their State of AI in Business 2025 report, which stated that with AI deployments, quote, most fail due to brittle workflows, lack of contextual learning, and misalignment with day-to-day operations. It's almost like you need human beings. It's like we're the middleware.
31:28
Like we are the people that look at data, operate within organizations, talk to each other. We're kind of context creatures, if you actually think about it. We're not really databases. That's not what we are. That's how I'm starting to think about people: we're sort of the context middleware. Anyway, they said the critical reason the agents didn't work was the lack of proper data context. And they give an example. I'll just go through it quickly, but read the article. I kind of made my point already.
31:55
They say, let's crystallize things and break down revenue growth as an example. Say a data agent is constructed within an organization. It's built to leverage foundation models. It's connected to all the right data sources. The data is good. And it's hooked up to a real nice UI where you can ask it questions. A query comes in: what was the revenue growth last quarter? You would think that would be really simple. For a human, it would be simple.
32:27
Does the agent really know how revenue is defined? Does it know how a quarter, three months, is defined? Is that business definition hard-coded into the warehouse, into the pipeline? Or then you start getting to the context layer. Okay, let's say it's in the context layer. Is the context layer being updated? Is it evolving over time? Or is it some sort of…
32:54
That’s what a semantic layer was. Is it something someone wrote years ago with all the basic definitions and it hasn’t changed?
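To make that concrete, here's a minimal sketch (all names here are hypothetical, not from any specific product) of what a maintained context layer might look like: business definitions live in one registry that the agent consults before answering, with a timestamp so you can tell when a definition was written years ago and never updated.

```python
# Hypothetical sketch of a "context layer": a versioned registry of business
# definitions an agent consults, instead of definitions hard-coded in the
# warehouse or pipeline. All names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class MetricDefinition:
    name: str
    formula: str   # human-readable business definition
    basis: str     # e.g. GAAP recognized revenue, not bookings or ARR
    period: str    # e.g. fiscal quarter, which may differ from calendar
    updated: date  # when this definition was last reviewed

class ContextLayer:
    """Registry the agent queries to disambiguate terms like 'revenue growth'."""
    def __init__(self):
        self._metrics = {}

    def register(self, m: MetricDefinition):
        self._metrics[m.name] = m

    def resolve(self, term: str) -> MetricDefinition:
        # Fail loudly on undefined terms instead of letting the agent guess.
        if term not in self._metrics:
            raise KeyError(f"No business definition for '{term}' -- ambiguous query")
        return self._metrics[term]

    def stale(self, term: str, as_of: date, max_age_days: int = 365) -> bool:
        # The maintenance question: is this definition actually kept current?
        return (as_of - self.resolve(term).updated).days > max_age_days

ctx = ContextLayer()
ctx.register(MetricDefinition(
    name="revenue_growth",
    formula="(revenue_q / revenue_q_minus_1) - 1",
    basis="GAAP recognized revenue (not bookings, not ARR)",
    period="fiscal quarter",
    updated=date(2023, 1, 15),
))

d = ctx.resolve("revenue_growth")
print(d.basis)  # which 'revenue' the agent should use
print(ctx.stale("revenue_growth", date(2025, 4, 20)))  # True: over 2 years old
```

The point of the sketch is the `updated` field and the `stale` check: a semantic layer someone wrote years ago still resolves queries, it just resolves them wrongly.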
33:02
Is the user looking for the run rate? Or the ARR? Is it normalized? Are we looking at accounting standards? Are we looking at non-accounting standards? I mean, even the most basic of questions, you can see it screw up. I would suggest you can see this all the time. If you play on these apps, which I use all day long, I'm amazed at how some simple things it just can't get. Sometimes it's amazing, like with summaries, but other times you ask it the most basic question
33:32
and it can't get there. And you're like, dude, this is so strange that it seems incapable of certain things. So you basically need context. Anyways, that's kind of the other point. I'm not sure if this idea is going to last, this idea of a context layer. Sometimes when people raise these new ideas, the value is in the fact that they've raised a problem.
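You can see why the agent screws up with a few made-up numbers. Under different but defensible definitions, "what was revenue growth last quarter?" produces different answers, and without a context layer the agent has no way to know which one you meant. The figures below are purely illustrative.

```python
# Illustrative arithmetic (made-up numbers): the same question, "what was
# revenue growth last quarter?", has several defensible answers depending
# on which definition the agent assumes.
quarters = {"2024Q4": 100.0, "2025Q1": 110.0}  # GAAP recognized revenue, $M
yoy_prior = 95.0                               # 2024Q1 revenue, $M

qoq = quarters["2025Q1"] / quarters["2024Q4"] - 1  # quarter over quarter
yoy = quarters["2025Q1"] / yoy_prior - 1           # year over year
run_rate = quarters["2025Q1"] * 4                  # annualized run rate

print(f"QoQ growth: {qoq:.1%}")         # 10.0%
print(f"YoY growth: {yoy:.1%}")         # 15.8%
print(f"Run rate:   ${run_rate:.0f}M")  # $440M
```

Three numbers, all "correct," all answers to the same English question. A human analyst would ask a clarifying question; an agent without context just picks one.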
33:56
And so, yeah, that's the real problem. To me, those are the two problems. It's the data architecture, which I talked about, and it's the context layer. I kind of buy the solution to the data, while I'm not sure I buy the solution to the context yet. But I think the question is right. I think the problem is right. That's kind of where I am. Anyways, that's pretty much it for today. That's a lot of me talking theory. This would be much better if I had some graphs. It's hard to describe layers just
33:56
talking through them. I'll write this up in articles. I'm behind from last week with articles, so I'll send those out shortly. Last week I was in Phuket for Songkran, which was super fun. I've never done Songkran in Phuket. I actually didn't even go to the… I don't really go to the western side of the island, the Patong area. I might go down to Kata a little bit, but I far prefer the eastern side, where Phuket town is.
34:53
And that's where we did Songkran, in Phuket town, which was fantastic. It was super fun. My standard was always to go to Chiang Mai, but yeah, I really enjoyed that a lot. So yeah, that was my last week. I'm also behind on a podcast because of that. I'll catch up this week. But it's been a pretty fantastic week and I'm getting on the road again here pretty shortly. Anyways, think about these. If you have any thoughts on how to think about this, I'd appreciate it.
35:21
I've been beating my head against these four or five problems. One thing about all of this, at least for me: digital strategy is quite straightforward, because I can look at 20 years' worth of company history. People have been doing digital transformation and digital strategy since the late 80s. But all this generative AI and agent stuff, it's a lot of guessing and trying to put it together and looking at pilots. It's hard to piece together. This is actually why I really enjoy looking at
35:51
Alibaba, Tencent, Huawei, Baidu, because these AI cloud companies are actually deploying these things into companies. So you can actually see it work in practice, rather than just some media announcement out of Silicon Valley for a company that may not be around in two years. So I actually like AI cloud as a way to get at this question. But yeah, I'm lacking the long history of what matters over time. So it's a bit challenging.
36:18
I don't know if I've got the answer. I think I've got the right problems. And yeah, we'll see. Anyways, that is it for me today. I hope that is helpful and I will talk to you next week. Bye bye.
——–
I am a consultant and keynote speaker on how to increase digital growth and strengthen digital AI moats.
I am the founder of TechMoat Consulting, a consulting firm specialized in how to increase digital growth and strengthen digital AI moats. Get in touch here.
I write about digital growth and digital AI strategy, with 3 best-selling books and over 2.9M followers on LinkedIn. You can read my writing via the free email below.
Or read my Moats and Marathons book series, a framework for building and measuring competitive advantages in digital businesses.
Note: This content (articles, podcasts, website info) is not investment advice. The information and opinions from me and any guests may be incorrect. The numbers and information may be wrong. The views expressed may no longer be relevant or accurate. Investing is risky. Do your own research.