Cyber Wars 2024: Sovereign AI, Crypto & Cybersecurity

Guests:
Ram Ahluwalia & Dr. Adel Elmessiry
Date:
03/06/24

Thank you for listening to this episode!

Support our podcast by spreading the word to new listeners. We deeply appreciate your support!

Episode Description

In this episode we chat with Dr. Adel Elmessiry, a technologist and serial entrepreneur who has taken two technology companies from inception to acquisition.

Episode Transcript

[00:00:00] Welcome to Non Consensus Investing. I'm Ram Ahluwalia, your host and CIO at Lumida Wealth, where we specialize in the craft of alternative investments. At Lumida, we help guide clients through the intricacies of managing substantial wealth so they don't have to shoulder the burden alone. Through this podcast, we draw back the curtain to reveal the strategies employed by the best in the business for their high net worth clients, so that you too can invest beyond the ordinary.

All right, I'm really pleased to be joined by Dr. Adel Elmessiry. So Dr. Adel, how do you go by, by the way? What's your preference? Yes, Dr. Adel. If I had an extra E, I'd be rolling in the dough. We'll see what we can do there. Dr. Adel's got a PhD in machine learning and AI. He's a serial entrepreneur who's built two successful tech companies from inception to acquisition.

He's also early in blockchain. We actually met at the Satoshi Roundtable. So the focus of this podcast is [00:01:00] on the intersection of AI and crypto, from an investing lens and an innovation lens, and then we'll get into some AI and crypto philosophy. So I want to start off first, Doctor: where do you see most of the value capture in AI?

On the one hand, you've got the application layer. Those are the OpenAIs of the world, Perplexity, Inflection, and then Google Gemini. And then you've got, on the other side, the silicon layer, and you've got all sorts of infrastructure in between: machine learning ops, integrators, NVIDIA, component suppliers, you name it.

So who's gonna capture the most value in the AI gold rush? So that's a fascinating question, and it starts with the same answer that we had before in the gold rush: it's shovel makers. NVIDIA is a shovel maker. They are selling shovels, and you are just plugging GPUs into motherboards. That is all good until the [00:02:00] paradigm shifts.

So today, most of the LLMs use GPUs, basically transformers and so forth, in order for them to process the data and ingest it, train a model, and then you can use it. However, what's coming up next is going to be really more interesting, because there are alternative models that do not require as much GPU power.

So what happens, then, when we change the paradigm, not by adding more GPUs, but by actually increasing the capability of the algorithm itself? We'll be able to do the exact same thing using CPUs. So it will shift to a different model, using CPUs. Yes, there is research coming out of Rice University.

Professor Anshu is leading that; ThirdAI is his company that is doing CPU-based AI processing. And the result is actually very comparable, and it has different [00:03:00] characteristics than the ones produced by the transformer models. See, everything that we know right now in this wonderful revolution is based on transformer technology, which requires a lot of GPU power because it's computationally intensive.
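
To give a flavor of how a CPU-based approach like this can work, here is a toy sketch of hash-based sparsity, the general idea behind the Rice group's published work on CPU deep learning. This is an illustrative reconstruction, not ThirdAI's actual code: the layer sizes, hyperplane count, and function names are all made up for the example.

```python
# Toy sketch of hash-based sparse inference on a CPU (illustrative only;
# not ThirdAI's implementation). Instead of computing every neuron's
# activation (dense), we hash the input and only compute the neurons
# whose weight vectors landed in the same hash bucket, skipping the rest.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 10_000, 256
W = rng.standard_normal((n_neurons, dim))   # one weight row per neuron
planes = rng.standard_normal((6, dim))      # random hyperplanes -> 64 buckets

def bucket(v):
    """Sign pattern of v against the hyperplanes, used as a hash bucket."""
    return tuple((planes @ v > 0).astype(int))

# Index every neuron by the bucket of its weight vector (done once, offline).
index = {}
for i in range(n_neurons):
    index.setdefault(bucket(W[i]), []).append(i)

def sparse_forward(x):
    """Compute activations only for neurons that share x's bucket."""
    active = index.get(bucket(x), [])
    return {i: float(W[i] @ x) for i in active}

x = rng.standard_normal(dim)
out = sparse_forward(x)
print(f"computed {len(out)} of {n_neurons} neurons")  # a small fraction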

However, if you can achieve the same results without needing to do the same computation, then you get a totally different game board, right? So CPUs rely on sequential operations, GPUs have parallel operations, and the parallel operations are what enable you to unlock speed. So I don't know if you can double-click a little bit more on the idea that CPUs can enable

AI computation. It looks like it's in the research stage right now; we're very far away from the commercial stage. The transformer paper at Google was published in 2017, and here we are seven years later, after the GPT launch. So [00:04:00] basically, you are right. The GPUs depend on an array of processing units, right?

So if your problem is highly parallelizable, it lends itself to being parallelized on multiple GPUs or multiple CPUs, and that actually works. The other thing that you need to think about is the time for processing. A sequential CPU will be able to compute the same problem and arrive at the same results, except it will take longer, because you have to do a thousand calculations, and it does one calculation, say, per microsecond,

versus the GPU, which can do all thousand at the same time if it has a thousand-unit array; if it has a hundred, it takes ten cycles. But what if you don't have to go through the entire thousand to compute them? That's the paradigm shift: you will be able to focus on just the areas that you want to compute.
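
The arithmetic of that argument is easy to make concrete. Here is a back-of-the-envelope sketch using the guest's illustrative numbers (one operation per cycle per lane; real hardware is messier):

```python
# Back-of-the-envelope model of the timing argument (illustrative numbers).
# A sequential core finishes one operation per cycle; a parallel device
# with k lanes finishes k per cycle.
def cycles_needed(n_ops: int, lanes: int) -> int:
    return -(-n_ops // lanes)  # ceiling division: leftover ops cost a cycle

n_ops = 1_000
print(cycles_needed(n_ops, 1))      # sequential CPU: 1000 cycles
print(cycles_needed(n_ops, 100))    # 100-lane device: 10 cycles
print(cycles_needed(n_ops, 1_000))  # 1000-lane GPU:   1 cycle

# The shift the guest describes: if a smarter algorithm only needs, say,
# 50 of the 1,000 operations, even the sequential core becomes competitive.
print(cycles_needed(50, 1))         # 50 cycles, no GPU required
```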

Let me give you a quick analogy. It's not accurate, [00:05:00] but it will get the story home. So there is something called pruning in algorithms. Meaning, if you are playing a game and you have a tree, and you know that you are computing the probability of winning for each branch, and you keep going down that calculation, you could calculate the entire tree all the way to the leaves.

However, if you know that a branch is already going to lose, you don't have to go down and compute the rest of it. That becomes a waste of time and a waste of computational power. That's exactly how the newer models are doing it. They are finding clever ways not to compute things that will not be used later on, and hence reduce the size of the problem, instead of just throwing it at GPUs and CPUs. Because if all that you know is GPUs, then everything becomes a GPU problem; just send it to bigger and bigger ones.
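
The pruning analogy maps onto a classic technique. Here is a minimal sketch of alpha-beta pruning on a toy game tree: once a branch provably cannot change the outcome, its remaining subtrees are never evaluated.

```python
# Minimal alpha-beta pruning on a toy game tree. Nested lists are
# positions; numbers are leaf payoff scores. Branches that cannot
# affect the final answer are skipped entirely.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):        # leaf: a payoff score
        return node
    best = float("-inf") if maximizing else float("inf")
    for child in node:
        score = alphabeta(child, not maximizing, alpha, beta)
        if maximizing:
            best = max(best, score)
            alpha = max(alpha, best)
        else:
            best = min(best, score)
            beta = min(beta, best)
        if beta <= alpha:                     # this branch can't matter:
            break                             # skip its remaining subtrees
    return best

tree = [[3, 5], [6, [9, 8]], [1, 2]]
print(alphabeta(tree, maximizing=True))       # 6; leaves 8 and 2 never visited
```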

That makes a lot of sense. That's how, say, the chess algorithm [00:06:00] Stockfish would play chess. It has a decision tree, it assigns an expectancy score, and it evaluates whether to go down that decision tree, and of course we know you get grandmaster-level chess play out of these CPUs. But you would also have to have the kernels; you'd have to have the coding language itself be compatible with the execution architecture.

So even if it's theoretically possible, today these LLMs are built on a parallel compute model. So the software has to go along for the ride with the hardware. Isn't that gonna be a limitation for CPUs? If not, I'm gonna go buy Intel stock tomorrow. So, we're still a couple of years away from commercializing all of this stuff at a higher level. However, you are looking at the GPU from the capacity point of view, but deep down it's basically an array of processing units.

And that's also why, when you go to train a model on any of the cloud providers, you select between different GPUs based on their capacity and then their [00:07:00] speed and the memory associated with them. So it comes down to the same fundamental problem: your problem today is highly parallel because you are tackling a bigger result space.

If you are reducing the result space from the beginning, you are able to process it, and there's nothing that will preclude the same new algorithm from being run on GPUs. It's just that you will not need to get a whole farm of GPUs. You can then compute that on a laptop with a really nice NVIDIA card in it.

That's happening now, right? AI computers are coming. They're redesigning the laptop and the desktop to have GPU compute, which of course happened in the past with video games, but now it's going to become native to the common desktop. I think there's still more on this GPU versus CPU, and we can move past that topic.

GPUs are really good at matrix calculations, which is what you're doing when you're scoring; scoring is when you generate a result, obviously, from an input. It's a really intriguing point that [00:08:00] you're making. If you're right, it's good for humanity: lower energy consumption. It gives relevancy to the old-world technology, but we've got years to go.
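
For context on the matrix point, here is what "scoring" looks like as one matrix calculation, with toy shapes standing in for a real model (actual models are far larger, and they run this multiply on a GPU precisely because it parallelizes so well):

```python
# "Scoring" as a single matrix multiply (toy sizes, not any particular
# model): a hidden state becomes one score per candidate output token.
import numpy as np

rng = np.random.default_rng(1)
hidden_size, vocab_size = 256, 10_000
h = rng.standard_normal(hidden_size).astype(np.float32)    # state after the input
W_out = rng.standard_normal((vocab_size, hidden_size)).astype(np.float32)

logits = W_out @ h           # 10,000 dot products -- embarrassingly parallel
best = int(logits.argmax())  # highest-scoring candidate token
print(logits.shape, best)
```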

But that's a non-consensus view, the idea that CPUs can do AI computation. And we have seen it before. Remember when you used to play high-end games, you had to have a very high-end graphics card in order to compute that; that's a GPU, in essence. But as we developed more and more algorithms, the cards did not make that much of a difference.

Yes. You could go to 120 frames per second on a really beefy card, but 60, even 30, would look fine, and it reduces the computational need. That's what's happening. Great. It's a great point. Let me ask this. When I look at AI versus the dot-com era... I'm gonna share my screen here in a second.

The dot-com era obviously had an extraordinary bubble, but also extraordinary innovation. In the dot-com era, we had the rise of [00:09:00] email, the rise of the browser, Netscape Navigator. We had chat. We also had the rise of e-commerce and newsgroups. And if I look at where we are today, we have the ability to do coding.

You can get a 1x engineer and make them a 10x engineer. You can do reading, drafting, and summary. You get, like, the mini intern who does research analysis. But we don't have the same kind of innovation that we saw in the dot-com era that transformed day-to-day life. That's the key standard: day-to-day life for broad swaths of the population.

I think COVID brought in grandmas and grandpas that weren't on Amazon yet. That was the last mile. They're here now. But, I don't know, I believe that GPT usage is declining over time. I think we're set up for disillusionment on the front end. On the back end, we're seeing the transformation of the call center that's taking place.

We're seeing firms like Klarna reinvent customer service. And I believe the promise of that. I [00:10:00] believe the promise on the front end, but I don't believe that it's happening anytime soon. What's your take on that? Am I too pessimistic, or am I being realistic? No, actually. The truth, in my view, is in between.

So every time we invent something new, we don't know what to do with it. So we quickly go to the simplest utilization of it. We invented the internet, and immediately you start putting, like, different images on it, and all of that was not really the use for it.

But it was the quickest thing that was super nice, super easy to use at that point in time. Right now we are in evolution discovery mode, in my opinion. Meaning, really look at the invention of the automobile. People had other modes of transportation before, but it took longer. Now we have automobiles that are much easier.

We can store them in the garage; they do not require us to feed them, and all of that stuff. And [00:11:00] yes, it killed a whole industry of blacksmiths and all that. But then you've got a tool, and people are capable of driving to town, living in the suburbs, and so forth. The same thing happened with the airplane.

The same thing happened with the internet: the phone companies were not really the internet. They used to cut the circuit when they detected you were using a modem, but then it switched. Now we are talking over TCP/IP. So that's the incumbent inversion theory: you always start with the incumbent being the ground, and then it flips. Right now we're looking for that next tool. What will the AI actually bring us as value?

When it just came out and you were able to talk to it and it was replying to you, that was: wow, nice, great. But the early inventions of much more rudimentary models go way back; I think it was in the 60s, they had ELIZA and a couple of other ones that you could talk with, and it would actually reply the same way, but now it's on a [00:12:00] larger scale.

Oh, ELIZA. The original attempt to beat the Turing test. That's right. It didn't beat the Turing test, though, which is a test of simulating human behavior. Yes. What will happen next is we will have to find the best tools to use. Those will translate into companies that actually figure it out. You talked about customer service, but I think about the personal assistant,

where you don't have to figure out where you are going, all of that stuff; it tells you where you're going. Let me challenge you on this. So, like, the scope of opportunity I think is pretty clear, and we're gonna double-click on sovereign AI and government use cases of AI, which I don't think are getting enough attention.

I know you're focused on that as well. Everyone can see the dream of AI in the consumer AI application, the enterprise application. I think that's been discussed. Let me challenge you on this. I would say this: there are two branches of the tree. Yes, AI iteration will happen. One branch of the tree is that it's an engineering problem primarily, which means we increase the context window, 10,000 tokens, [00:13:00] so that the AI can ingest a book and give you a good response.

Number two, you throw more GPUs at it. You train on more data, and you train it on the Library of Congress. Those are engineering problems. Yes, there are some diminishing marginal returns, but you just throw more stuff at it. And that is within the levers that we have today. The other branch is that, no, we need a research breakthrough.

The equivalent of what Google did with PageRank. Google was the 18th search engine on the scene, after Lycos, Excite, AltaVista, Yahoo, and others that have gone into the dustbin of history, but Google had a breakthrough in engineering that took a human to conceive of. And my sense, and I'm not an engineer or an AI researcher, so this is just intuition, is that we need a research breakthrough to truly unlock the wondrous expansion of human experiences that we can [00:14:00] have with AI.

So you are actually getting to a really good point, but what I want to expand on is that we need to figure out the value, the value of what we are trying to do today. What's the value in AI replying to you and holding a conversation? It might be a novelty, so it's not that valuable. But if it can help you rewrite your essay or something like that, now you're driving a little bit more value. If it can help you figure out

your vacation, or what you want to do. It knows that you like this kind of food, you like the beach, you like this, and the price range where you usually want to buy. So, things that used to take up a human's time to understand you and then book the vacation for you, now the AI can translate into queries that go out on the web and then use any of the booking systems out there to book for you.

That's what I call the actual value of what you're creating. Another company that I know of is developing a virtual cure for autism using [00:15:00] AI as a guide. So basically, today, if you have an autistic kid, which I do, autism makes your memory not as good as other people's. You have to repeat so many times before it sticks.

So they need what I call muscle memory through repetition. And humans suck at repetition, but a computer is perfect at it. So creating an AI that can understand where the autistic kid is at, and start to help the kid with the next level of how they move, is a tremendous value add, like hundreds or thousands of dollars per kid, if not millions.

Look, I agree with those values, especially in these specialized and niche vertical use cases, right? Legal GPT, Bloomberg GPT, you name it, radiology GPT; finance GPT is coming. The exciting unlock, though, and this is what Sam Altman's talking about in Dubai, in the desert, is just a [00:16:00] leveling up of generalized AI, and we'll get to AGI in a moment. But today, AI is at a state where it's like an intern.

It needs supervision; you need to check the results. Those narrow, verticalized applications do work very well, though. So that's great. We're getting the productivity improvements from that, but not getting the broader population benefit. Yeah. You're saying the same thing. So you're saying it's an intern, but imagine: AI used to be a five-year-old kid.

Now, in my assessment, it's about a nine-to-ten-year-old kid. Try to take that and stick him or her in an office as an intern. They know what you are talking about, but they are not domain-specific in what you are doing. What I think will happen over the next couple of years is more domain-specific AI.

You're familiar with RAG, right? Retrieval-Augmented Generation. D-RAG is something that I have been researching, which is basically domain [00:17:00] retrieval-augmented generation AI. So in that aspect, it's more focused on a certain domain. It might be a domain of knowledge.

It might be the domain of a company. So it will be more focused. I think we'll get those in first, which will make life easier for you to ask about specific things. Like, hey, where should I eat? It knows Nashville, knows everything around it, so it gives answers quickly for me. Or if I ask about something like, how much will I pay for a copayment?
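
Since the discussion leans on RAG, here is a minimal sketch of the retrieval step the guest describes, scoped to a company's own documents. A toy word-overlap scorer stands in for a real embedding model, and generate() is a stub where a real system would call an LLM; the document names and helper names are all hypothetical.

```python
# Minimal sketch of domain-scoped retrieval-augmented generation ("D-RAG").
# Toy relevance scoring; a real system would use embeddings and an LLM.
import re
from collections import Counter

domain_docs = {  # the "domain of a company" the guest describes
    "plans.txt":  "The Gold plan copayment is 20 dollars per office visit.",
    "dental.txt": "Dental cleanings are covered twice per year.",
}

def tokens(text):
    """Lowercased word counts, ignoring punctuation and digits."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def score(query, doc):
    """Toy relevance: count of overlapping words between query and doc."""
    return sum((tokens(query) & tokens(doc)).values())

def retrieve(query, k=1):
    """Return the k most relevant documents from the domain corpus."""
    ranked = sorted(domain_docs.values(), key=lambda d: score(query, d),
                    reverse=True)
    return ranked[:k]

def generate(prompt):
    return f"[stub -- a real system sends this prompt to an LLM]\n{prompt}"

question = "How much will I pay for a copayment?"
context = "\n".join(retrieve(question))
print(generate(f"Context: {context}\nQuestion: {question}"))
```

The point of the domain scoping is that retrieval searches only the company's corpus, so the model answers from documents it can cite rather than from whatever it memorized in training.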