Insider's Guide to Energy

195 - How AI is Revolutionizing the Energy Industry: Insights from Enverus

Chris Sass, Jeff McAulay, Akash Sharma Season 4 Episode 195

Discover how generative AI is transforming the energy industry in this insightful episode of the Insider's Guide to Energy podcast. Hosts Chris Sass and Jeff McAulay sit down with Akash Sharma, Director of Product Innovation & Management at Enverus, to explore the impact of AI on energy data processing, decision-making, and operational efficiency. From accelerating analysis to improving business intelligence, learn how AI is driving innovation in oil, gas, and renewable energy sectors. 

Sharma dives deep into the role of generative AI, data analysis, and multi-agent frameworks, explaining how companies like Enverus are leveraging these technologies to enhance productivity and streamline decision-making. From AI-driven chatbots to advanced retrieval-augmented generation (RAG) systems, discover how enterprises can better utilize structured and unstructured data for faster, more accurate insights. 

Listen as the discussion unfolds around real-world AI applications, from predictive analytics in energy trading to geospatial analysis for infrastructure planning. Get an inside look at the future of AI in energy and how this groundbreaking technology is poised to change the landscape of energy operations and the energy transition itself.

We were pleased to host: https://www.linkedin.com/in/aksharma92/

Visit our website:
https://insidersguidetoenergy.com/


00:00:00 Akash Sharma 

I believe generative AI has the ability to transform the way we operate and process information and data across the energy industry. It has the ability to accelerate the rate at which we do analysis and make decisions in an informed manner, and in doing so, accelerate not just the evolution of the industry, but the energy 

00:00:20 Akash Sharma 

transition.

00:00:25 Chris Sass 

Your trusted source for information on the energy transition. This is the Insider's Guide to Energy podcast. 

00:00:37 Chris Sass 

Welcome to another edition of the Insider's Guide to Energy. I'm your host Chris Sass, and with me, as always, is Jeff McAulay, our co-host. Jeff, this week we have Akash Sharma with us. It's going to be an interesting show. 

00:00:46 Chris Sass 

We're going. 

00:00:46 Jeff McAulay 

to talk AI. Akash, welcome to the show. We're very excited to hear more about Enverus. 

00:00:52 Jeff McAulay 

And AI impacting the energy industry. First, just to start us off, can you tell us a little bit about your role and what you're doing at Enverus? 

00:01:01 Akash Sharma 

Absolutely. First of all, thank you so much, Chris and Jeff, for having me on the show. As somebody who's listened to quite a few of your episodes, I look forward to the opportunity to discuss this really interesting and exciting, you know, position we find ourselves in. So a little bit about the company first: I work for an organization called Enverus. 

00:01:21 Akash Sharma 

It started in 1999 in a variety of different iterations. 

00:01:27 Akash Sharma 

And as of today, it is the largest and among the fastest-growing energy-focused SaaS companies in the world, right. We pride ourselves on providing end-to-end data analytics and solutions targeted specifically at the energy industry, spanning from oil and gas to renewables to power, as well as back-office operations. 

00:01:46 Akash Sharma 

And, you know, consulting and data science on top of that. 

00:01:49 Akash Sharma 

Me personally, my role officially is Director of Product Innovation, which is a fancy way of saying I get to work with teams across the entire organization to find where new technologies and innovative solutions can provide the most value. So over the last decade or so I've worked with 

00:02:09 Akash Sharma 

different teams, different product parts of our organization, to look at new trends and emerging technologies and see how they could fit into solving incumbent challenges, or sometimes discover new step-change solutions. 

00:02:25 Chris Sass 

Recently there's been a lot of talk about AI. I started the show by introducing you and saying we're going to talk about AI, and you said you get to look at the technology and where it applies to the business. I guess it would make sense to start out. 

00:02:40 Chris Sass 

What kind of AI are we talking about? Are we talking about the generative AI that we talked about? And what's the impact it could have on the business? 

00:02:46 Akash Sharma 

I think that is a great question. More often than not, when I talk to peers and customers, there's a big confusion between generative AI and AI and all these things, where 

00:02:56 Akash Sharma 

it's, uh, I think AI as an application in the energy space has existed for decades, right? None of this is new. The first neural networks, which are the foundational blocks of what is now generative AI, were written in the 1960s, right. I think there are a few very fundamental 

00:03:16 Akash Sharma 

Market events that happened in the last few years that have led to the revolution we're seeing right now, and we can talk more about those as well. But a core focus of what my team is working on right now in our organization is the generative side of AI. There is a bigger team that works on other, 

00:03:34 Akash Sharma 

you know, it's almost interesting to call something "standard AI," but traditional technologies and innovations. What my team specifically is looking at is whether generative AI is something that can be approached from an industry and enterprise application perspective. 

00:03:48 Jeff McAulay 

That's great. And what kind of data sets are you analyzing? There's a lot of data out there. I think everybody is now familiar with the chatbots, text-based analysis, looking at PDFs. 

00:03:59 Jeff McAulay 

Is that one of the primary applications, or what types of data sets are you actually pulling into 

00:04:04 Jeff McAulay 

these models? 

00:04:07 Akash Sharma 

Absolutely. Yeah. So, you know, when we first started looking at generative AI and its application, it started off with, as you mentioned, you know, the chatbots and ChatGPTs: how can we leverage this technology to support 

00:04:21 Akash Sharma 

those sorts of applications? Because with the widespread usage of ChatGPT, that sort of became the incumbent way of processing these details. 

00:04:31 Akash Sharma 

But when we started to look at it from an enterprise perspective, we realized that the potential of generative AI actually reaches far beyond the chatbot itself. Having said that, some of the early work we are doing is in that area, because, you know, in technology, as I'm sure in other industries as well, there is a concept of a minimum viable product. 

00:04:50 Akash Sharma 

So you always start with something that can deliver core value, and then you expand from there. 

00:04:55 Akash Sharma 

But from a data set perspective, we are now operating in a space where we're dealing with large quantities of unstructured and structured data, as well as, you know, geometrical information, relational databases, multimodal images, graphical information, all of those, right? There are varying degrees of complexity 

00:05:16 Akash Sharma 

and varying degrees of success that come with each of those data sets, as you can imagine. But the framework and the technology itself are capable of handling all of those types of data. 

00:05:27 Jeff McAulay 

When you talk about models, there are proprietary models, you mentioned OpenAI, and there are open-source models 

00:05:34 Jeff McAulay 

like Llama or some of the other ones. So are you trying to train new models? Are you building customization layers on top of the existing ones, or working with open source? Tell us about that tech stack. 

00:05:49 Akash Sharma 

Absolutely. Yeah. So, 

00:05:52 Akash Sharma 

I think when you look at all these different models that exist out there, 

00:05:56 Akash Sharma 

there are sort of four or five different ways you can look at implementing generative AI in your industry, in your space, right? As you mentioned, with custom models, whether it's OpenAI or Anthropic or Llama, they're getting increasingly close to each other. When they started off, there was probably a bigger gap, but they're increasingly becoming the Olympic 100m 

00:06:16 Akash Sharma 

final, photo-finish type of models. 

00:06:19 Akash Sharma 

So that is a core part of the tech stack that we do use. We are also looking at building foundational models from scratch. Of course, you have to be very careful with the use cases there; as you can imagine, they're very, very compute- and cost-prohibitive to build. But there are certain use cases where that makes sense, and 

00:06:39 Akash Sharma 

fine-tuning is also something that we're looking at on a case-by-case basis. 

00:06:43 Akash Sharma 

Another thing that's starting to get talked about a lot, which has become a core foundation of how we are looking at these systems, has to do with agentic frameworks, right? So how do you take a particular model and data set combination and wrap it into an agentic container, and how do you chain 

00:07:03 Akash Sharma 

enough of these tools and agents together to create 

00:07:09 Akash Sharma 

a truly valuable workflow from a business and enterprise perspective? So we've got different initiatives in each of these buckets. But where we started, of course, the first one, was using a foundation model, building a RAG framework around it, and seeing what value could be brought from that. 

00:07:26 Jeff McAulay 

So let's get specific about an application here. Because I'm at the level of: throw a PDF into a custom GPT and ask the document a question. So that's out there, that's easy, that's free or near free. That's not what you're talking about. What specific applications? Or, you talked about an agentic framework: what is that agent doing, and who are they doing it for? 

00:07:49 Akash Sharma 

Sure. So I want to start off, actually, if you don't mind, with the example that you just gave, right, which is: you take a PDF, you drop it into a model, and you see the response you get. 

00:07:58 Akash Sharma 

That is a great workflow. I like that workflow because it allows us to unlock the potential of unstructured data that was usually hard to capture. We used to have to convert unstructured data into structured data, you know, those sorts of tables, to get at it. 

00:08:13 Akash Sharma 

The first step that goes from that sort of workflow to the next would be actually scaling out RAG itself, right? Retrieval-augmented generation is to generative AI what regression is to machine learning models. What I mean to say by that is: it's pretty easy to put together the first model, but to truly build a good, strong 

00:08:34 Akash Sharma 

regression 

00:08:35 Akash Sharma 

model, as well as a truly strong, reliable, and scalable RAG, is just as hard. So as an example, the first RAG that we built is a product that is now out there called Instant Analyst. It was the first generative AI based product out in the energy industry and one of the very few that have truly existed 

00:08:55 Akash Sharma 

on an enterprise level so far. The primary reason behind that is: if you want to expose something externally, outside of your organization, there's a lot of data orchestration that needs 

00:09:05 Akash Sharma 

to happen. Because in your use case, when you ask the model to give answers from one PDF, it's relatively straightforward, because the model will vectorize that particular PDF. But if you're asking it to answer a question from, let's say, a repository of 150,000 documents, that gets really tricky, right? Because now you're in the millions and millions of tokens, and 

00:09:26 Akash Sharma 

finding what's relevant for your question becomes complicated. And if the objective of these models is to assist you in decision-making and help you make the best decisions faster, then creating an orchestration system that makes sure the most relevant information actually percolates to the top 

00:09:43 Akash Sharma 

can be really difficult. So advanced RAG is an example of a workflow that we have done a pretty good job at with the Instant Analyst product, and it has seen really, really strong results. But that's one example of a workflow. 
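The retrieval step described here, ranking a large repository so that only the most relevant passages reach the model, can be sketched in a few lines. This is a toy illustration, not Enverus code: the bag-of-words `embed` function stands in for a real embedding model, and the three hypothetical reports stand in for a 150,000-document store.

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Rank every document by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda doc_id: cosine(q, embed(corpus[doc_id])),
                    reverse=True)
    return ranked[:k]

# Hypothetical mini-repository; a production system would hold ~150,000 reports.
corpus = {
    "rpt-001": "permian basin well production outlook and rig counts",
    "rpt-002": "offshore wind turbine supply chain constraints",
    "rpt-003": "permian basin midstream takeaway capacity analysis",
}

top = retrieve("permian basin production", corpus)
# Only the retrieved reports would then be passed to the LLM as context.
```

An advanced RAG system replaces each of these pieces, the embeddings, the scoring function, and the ranking, with far more sophisticated machinery, but the shape of the pipeline is the same.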

00:09:57 Jeff McAulay 

Can you be more specific? 

00:09:58 Chris Sass 

How does someone, hold on Jeff, how does someone interact with Instant Analyst? So let's get a little bit more into it. I mean, the name sounds like, you know, I click on something and it all happens really easily. 

00:09:59 

Yeah. 

00:10:08 

Really. 

00:10:09 Chris Sass 

How does my organization work with it, and what 

00:10:12 Chris Sass 

does it do for us? 

00:10:12 Akash Sharma 

Absolutely. So Instant Analyst, being our first product, as I mentioned, we started off with the paradigm that already exists. So it is a chatbot that is connected to about 150,000 to 160,000 research reports that our SEC-verified research arm has written, right. So they write research about energy trends, energy 

00:10:32 Akash Sharma 

equities across the board, and this model connects to that. Your method of interaction with it is a chatbot that either exists within our platform or as an app that's out there on iOS 

00:10:43 Akash Sharma 

and Android, and all you do is go in there, ask questions, and get answers. Now, where we delineate ourselves: you could say, hey, I could ask the same question to ChatGPT, right? I could ask the same question to Gemini. Where Instant Analyst truly delineates itself is, A, by quality. I am not sourcing the information from the Internet, 

00:11:04 Akash Sharma 

where really high-quality research could be weighted equally with somebody's 

00:11:09 Akash Sharma 

blog about that subject. 

00:11:10 Akash Sharma 

Right. And secondly, by minimizing hallucinations. There's a huge amount of focus in our process on minimizing hallucinations, because there's a much smaller tolerance for hallucinations and errors when it comes to B2B applications versus B2C applications. Like, I would be fine with my cake recipe from ChatGPT being 

00:11:30 Akash Sharma 

one egg off, but if it gives me wrong metrics for production numbers, uh, that's a whole different paradigm, especially if I'm making business decisions on that. 

00:11:40 Chris Sass 

OK, so you have a contained data set, or a better data set, and you've got some safeguards or safety rails. Do you risk some hallucinations? From what you said, that sounds like the 1.0 version of the product. So that was your first product. Let's turn the clock ahead to 2024, where we sit today. Where is the 

00:12:00 Akash Sharma 

Out today. 

00:12:01 Akash Sharma 

So where the industry is at today, the areas we're investigating, is what Jeff brought up earlier: the agentic frameworks, right? So you have the 1.0 product; you want to continue to take that 1.0 to 1.1 and 1.2, and that effort is going on. But what's the 2.0 solution? 

00:12:17 Akash Sharma 

So the 2.0 solution is a multi-agentic framework, where we believe that the way we are designing these things will fundamentally change how a user interacts with business intelligence products and suites. So what that means is, as an example: if you're trying to figure out some geospatial analysis of, you know, wells in a particular area, 

00:12:38 Akash Sharma 

or parcels in a particular area that meet certain 

00:12:40 Akash Sharma 

criteria, as a user, as an engineer, you do not have to speak the language of computers. You can code, so to speak, in English and give it your business instructions. The instruction that you give it gets broken down into specific tasks, and then those tasks are passed on to specific agents, and this is the power of the multi-agent 

00:13:01 Akash Sharma 

framework, in which each agent is essentially its own LLM and data source model. So imagine I'm asking: give me all the parcels around a particular substation or LMP node that meet certain 

00:13:14 Akash Sharma 

criteria. Then one agent's job is to figure out where this LMP node is, right? A second agent's job is to figure out what this radius means. A third agent's job is to figure out what parcels it contains, right? Another agent filters that. So as you chain enough of these processes together, it allows you, as the user, to focus on the decision-making involved in building and developing these projects, 

00:13:35 Akash Sharma 

and less on how all these pieces orchestrate, how you manage these datasets, and what the different IDs and systems are that connect to each other. That is the general thesis of the 2.0 project, and that's what we are working on right now. 
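The chaining described here can be sketched as plain functions: an orchestrator decomposes the English-language request into tasks and hands each to a narrow agent. All names, data, and logic below are hypothetical stand-ins; in a real multi-agent framework each step would be an LLM-backed tool rather than a hard-coded function.

```python
# Each "agent" is a function that handles one narrow task; an orchestrator
# chains them so the user only states the business question in English.
# Substations, parcels, and criteria here are all made-up examples.

SUBSTATIONS = {"Maple Creek": (30.0, -97.0)}      # name -> (lat, lon)
PARCELS = {
    "P-1": {"loc": (30.1, -97.05), "acres": 120},
    "P-2": {"loc": (32.0, -95.0), "acres": 300},
    "P-3": {"loc": (29.95, -96.98), "acres": 45},
}

def locate_substation(name):
    # Agent 1: resolve the named substation to coordinates.
    return SUBSTATIONS[name]

def parcels_within(center, radius_deg):
    # Agent 2: find parcels inside the radius (crude lat/lon distance).
    cx, cy = center
    return [p for p, d in PARCELS.items()
            if ((d["loc"][0] - cx) ** 2 + (d["loc"][1] - cy) ** 2) ** 0.5 <= radius_deg]

def filter_by_acreage(parcel_ids, min_acres):
    # Agent 3: apply the user's business criterion.
    return [p for p in parcel_ids if PARCELS[p]["acres"] >= min_acres]

def orchestrate(substation, radius_deg, min_acres):
    # The orchestrator chains the agents; the user never touches the plumbing.
    center = locate_substation(substation)
    nearby = parcels_within(center, radius_deg)
    return filter_by_acreage(nearby, min_acres)

result = orchestrate("Maple Creek", radius_deg=0.2, min_acres=100)
```

The value of the pattern is that each agent can be swapped or improved independently while the user's instruction stays the same English sentence.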

00:13:50 Jeff McAulay 

Akash, how do you deal, or how does the system deal, with conflicting information, either things that are out of date or at cross purposes? So in the Instant Analyst example, it could be that we've got the last 30 years of EIA projections. Everybody knows that those are famously off by significant amounts. 

00:14:09 Jeff McAulay 

Does the system, in the, you know, Instant Analyst example, understand to weight more recent data more strongly than old data? And then, similarly, it sounds like you're getting into an 

00:14:22 Jeff McAulay 

infrastructure siting example. One agent says, yeah, this is a great parcel because it's got high LMP, and the other agent says, no, it's a terrible parcel because it's in a floodplain, and they disagree. How do you resolve conflict between agents? 

00:14:38 Akash Sharma 

Yeah. 

00:14:39 Akash Sharma 

Absolutely, that is such a great question, because that has been such a core part of everything from information extraction to what is delivered, right? So let me talk about the date part first, and then I'll talk about 

00:14:52 Akash Sharma 

the other one. So when it comes to dates, understanding the different dates and conflicting information, that is the part of the RAG system that makes the difference between a basic RAG and an advanced RAG, right? Those are the retrieval and processing systems that we've built that allow the model to understand these things. So I'll give you an example of how you could build this: 

00:15:13 Akash Sharma 

uh, you could tell 

00:15:15 Akash Sharma 

the model that it will weight newer forecasts higher than older forecasts unless specified by the user, right? So unless, Jeff, you say, hey, compare the forecast that was generated in 2022 versus now, and, you know, maybe there was a Bluebird event in 2023, so you rely on that forecast. 

00:15:36 Akash Sharma 

You can give it that specific instruction. Now, in a standard RAG this won't really matter, because all you're doing is a vector similarity search. Or you can come up with a fancier search algorithm, but it's basically a similarity index. 
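The recency rule described above, prefer newer forecasts unless the user pins a vintage, can be expressed as a scoring tweak on top of plain similarity search. A minimal sketch with hypothetical documents and an assumed exponential decay; a real advanced RAG would encode this as retrieval instructions rather than one fixed formula.

```python
from datetime import date

# Hypothetical scored candidates: (doc_id, similarity, publication date).
candidates = [
    ("outlook-2021", 0.80, date(2021, 6, 1)),
    ("outlook-2024", 0.75, date(2024, 6, 1)),
]

def recency_weight(published, today, half_life_days=365.0):
    # Older documents decay: half the weight for each half-life elapsed.
    age = (today - published).days
    return 0.5 ** (age / half_life_days)

def rank(candidates, today, prefer_recent=True):
    # Blend similarity with recency unless the user pins a specific vintage.
    def score(c):
        doc_id, sim, published = c
        return sim * recency_weight(published, today) if prefer_recent else sim
    return sorted(candidates, key=score, reverse=True)

today = date(2024, 9, 1)
ranked = rank(candidates, today)                       # newer outlook wins
pinned = rank(candidates, today, prefer_recent=False)  # raw similarity wins
```

With recency on, the slightly less similar but much newer 2024 outlook outranks the 2021 one; with the user's pin, pure similarity decides.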

00:15:47 Akash Sharma 

But in an advanced RAG system, you can again break this out into specific instructions, right, and give the model very specific instructions on what to do with which data set. And so, with these instructions in mind, 

00:16:01 Akash Sharma 

in our Instant Analyst product we tend to prioritize newer information more, especially when it comes to market outlooks, again, unless specified by the user. When it comes to conflicting information, that gets really interesting. In the example that you mentioned, you have one agent that's evaluating on certain criteria 

00:16:22 Akash Sharma 

and another agent that's evaluating on other criteria. 

00:16:26 Jeff McAulay 

Uh. 

00:16:28 Akash Sharma 

It sounds a little funny, but the answer is actually yet another agent whose job is to do this evaluation, right. And we've given it certain evaluation metrics, so that the model doesn't just feel like, you gave me a question and I'm going to give you an answer regardless. That's not the priority. I would rather the model come back to you, Jeff, and say: 

00:16:48 Akash Sharma 

hey, I'm getting this type of result with these criteria and this type of result with those criteria. Hey Jeff, you are the expert: which criteria do you think are better to use? 

00:16:57 Akash Sharma 

Right. And again, all of this interaction that you're doing is in English. It's like working with an intern or a junior analyst that already knows your data schemas, already knows a lot of the logic, but maybe does not have the experience and understanding and the nuance of how these datasets interact with each other. 
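The evaluator-agent pattern described above can be sketched as follows. Everything here is a hypothetical toy: the two criterion agents are hard-coded rules standing in for LLM-backed evaluations, and the evaluator simply passes a clear verdict through or defers to the human expert on conflict.

```python
# A toy "evaluator agent": when the upstream agents agree, pass the answer
# through; when they conflict, surface both views and defer to the expert.
# Parcel fields and thresholds are made-up examples.

def lmp_agent(parcel):
    return parcel["lmp"] >= 40          # criterion 1: attractive pricing

def flood_agent(parcel):
    return not parcel["in_floodplain"]  # criterion 2: acceptable flood risk

def evaluator(parcel):
    verdicts = {"LMP pricing": lmp_agent(parcel), "flood risk": flood_agent(parcel)}
    if all(verdicts.values()):
        return {"decision": "good parcel"}
    if not any(verdicts.values()):
        return {"decision": "bad parcel"}
    # Conflict: ask the human expert which criterion should win.
    return {"decision": "ask user", "verdicts": verdicts}

good = evaluator({"lmp": 55, "in_floodplain": False})
conflict = evaluator({"lmp": 55, "in_floodplain": True})
```

The key design choice is the third branch: rather than forcing an answer, the system hands the disagreement, with both verdicts attached, back to the expert in plain English.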

00:17:16 Chris Sass 

How much learning takes place in that example? 

00:17:19 Chris Sass 

Does it get to know me, you know, as I ask prompts and study things? Does it get to know what I'm thinking? Is there any learning on your side, or is it always a blank slate when I start? 

00:17:33 Akash Sharma 

Great question again. So there is this concept of a semantic learning process, where you can create a repository of: hey, these are the kinds of questions Chris likes to ask, and when he usually talks about these things, or when Jeff talks about these things, you know, prioritize exclusion layers over LMP pricing, or whatever, right. 

00:17:53 Akash Sharma 

That is something that we haven't built into our system yet, but that is something that you can build 

00:17:58 Akash Sharma 

from, more so an 

00:18:01 Akash Sharma 

ongoing learning process. There are pros and cons associated with it. The primary reason behind that is: it only works if that version of this enterprise product is connected only to you, right? Because what I don't want is, you know, one user giving so many specific instructions that the experience 

00:18:22 Akash Sharma 

changes for another user. 

00:18:24 Akash Sharma 

Right. And similarly, taking that continuous user input as a retraining data set is not necessarily always a good thing, because it could be, you know, a continuous amount of errors, whether user error or lack of data, getting trained into the model instructions, and that could lead the model to behave weirdly. So the way we are currently handling that 

00:18:44 Akash Sharma 

is: we work with customers to understand what are the kinds of questions it's doing a good job at and what are the kinds of questions it's not doing a good job at, and then we go back and create, you know, additional instructions, system prompting, or, if we need to, a fine-tuned node on top of it, things 

00:19:00 Akash Sharma 

like that, to solve thematic issues, right? But we are looking into creating a semantic add-on where, you know, Jeff's version knows what Jeff likes answered, and Chris, your version knows what you like answered. 

00:19:12 Jeff McAulay 

Can you talk more about how you do QA on something like this? Because we're experimenting with this as well: great, agent, go comb through these thousand documents and tell me the following information. We get it 

00:19:24 Jeff McAulay 

back, and it's like, 

00:19:25 Jeff McAulay 

great, who wants to check the work of the AI agent to see if it was right? 

00:19:31 Jeff McAulay 

Because, right, it's at least nice that you mentioned you're mitigating hallucinations, and it can come back and it can show references, but there's still a trust issue there. How do 

00:19:42 Jeff McAulay 

you deal with that? 

00:19:44 Akash Sharma 

Uh. 

00:19:46 Akash Sharma 

It is not easy to evaluate generative AI models; whatever answer I give you, that is a fundamental truth of these, right? They're just not easy to evaluate. There is a true cost associated with getting these models, and these solutions, to a place where they're reliable from a trust standpoint. There is also this idea of 

00:20:05 Akash Sharma 

the bar being higher for an AI model, right? Like, I don't know the stats off the top of my head, but the number of accidents on US highways every year is astounding. 

00:20:14 Akash Sharma 

But a Tesla Autopilot car crashes once in 100 experiments, and it's like nobody trusts the Autopilot again, right? So there is definitely some apprehension about errors, especially when generated by AI. The way we are trying to handle it, and again, we don't have the solution yet, is a lot of effort, rigor, and things of that nature. 

But coming up with metrics like, you know, is this answer grounded in truth, right? So you essentially have to create an entire evaluation framework, which can be cost-prohibitive, because you're asking different models in 

00:20:48 Akash Sharma 

the agentic workflows to test these things out. So if you ask a question whose answer contains, let's say, 20 factoids from 30 documents, you then have to take that answer and source corpus pair of information as your test case for your evaluation protocol. So you'll do spot checks on different data points: 

00:21:09 Akash Sharma 

do these data points exist in both the sources and the answer? Do they exist within the same context? And then, 

00:21:16 Akash Sharma 

you know, do those evaluations and get a scoring metric, 

00:21:19 Akash Sharma 

and then 

00:21:20 Akash Sharma 

scale this across so many different questions. One of the things that we've also done that has helped, and Jeff, this might be helpful since you guys mentioned looking at this as well, is almost like sensitivity analysis with numerical information: how much does the answer quality change if I change the question slightly, right? 

00:21:40 Akash Sharma 

Just different iterations of the question. What are the areas where you're getting some bleed in quality, things and practices of that nature. This is not an exact method or an exact science, but that is something that, you know, we continue to work on. 
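The spot-check idea described above, verifying that factoids in an answer also appear in the source corpus and turning the hit rate into a score, can be sketched crudely with substring matching. A real evaluation framework would use an LLM judge to handle context and paraphrase; every string below is a made-up example.

```python
# Toy groundedness check: a factoid counts as grounded only if it appears
# in both the generated answer and at least one source document.
# Real frameworks score context and paraphrase too; these strings are invented.

def grounded_score(answer, sources, factoids):
    corpus = " ".join(sources).lower()
    hits = 0
    for fact in factoids:
        f = fact.lower()
        if f in answer.lower() and f in corpus:
            hits += 1
    # Fraction of spot-checked factoids supported by the sources.
    return hits / len(factoids)

sources = [
    "Basin output rose to 5.8 mmbbl/d in Q2.",
    "Rig count fell by 12 units quarter over quarter.",
]
answer = "Output rose to 5.8 mmbbl/d while the rig count fell by 12 units."

# The third factoid is unsupported, so only two of three are grounded.
score = grounded_score(answer, sources,
                       ["5.8 mmbbl/d", "fell by 12 units", "gas prices doubled"])
```

Running the same check over paraphrased versions of the question, as Sharma suggests, then shows where answer quality starts to bleed.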

00:21:54 Chris Sass 

Your answer, though, makes me wonder. It still seems like we're using a human-to-machine interface, as opposed to a machine-to-machine interface, in your examples. So in these early use cases, are you generally seeing a person going into an app, or are you seeing APIs from applications making queries 

00:22:14 Chris Sass 

on behalf of a workflow or process? 

00:22:18 Akash Sharma 

Yeah. So we've seen both of those cases. I think at the end of the day, wherever this information flows to before a decision is made, right, we usually want a human in the loop, given the non-deterministic nature of these models. You do not want them 

00:22:34 Akash Sharma 

directly hitting an API and making a decision, right? You don't want the analysis that comes from these reports to trigger a trade, for example. You don't want that. What you want is for the trader to be able to get an answer from 100 research documents in 15 seconds, for that answer to be as close to the ground truth as possible, 

00:22:53 Akash Sharma 

and for the trader to then make the most informed trade possible, right? But we've had some API calls that have more to do with some of our customers creating their own enterprise solutions. So they'll have, 

00:23:05 Akash Sharma 

you know, their proprietary data that they want to connect to our solution set through an API endpoint, to create a more automated chain of events. But there's always a human in the loop in any of the test cases we've done so far. 

00:23:15 Chris Sass 

Got it. 

00:23:18 Jeff McAulay 

Akash, I'm wondering if we're heading towards some sort of AI moment that we'll call a Kasparov moment. Right? So when Deep 

00:23:28 Jeff McAulay 

Blue beat Garry Kasparov in chess, this was a highly 

00:23:32 Jeff McAulay 

visible, easy-to-recognize moment when a machine had surpassed a human. And maybe in some applications that has clearly already happened, meaning having it read a 100-page PDF and come back with an answer in seconds. OK, so that's already happened. Is there an energy-specific application 

00:23:52 Jeff McAulay 

where that will be noticeable? Anything you think comes to mind? 

00:23:57 Akash Sharma 

Yeah, I think 

00:24:01 Akash Sharma 

the energy industry, the energy space, is so multi-dimensional and so complicated that it's hard for me to give a Kasparov moment. I think there are some big goals that we are going towards, and, as you mentioned, a couple of these moments may have happened, and a couple more of them may continue to happen. I think our goal at the end of the day is that 

00:24:23 Akash Sharma 

if you are somebody in the energy space, in any of these energy domains or in the energy transition space, and you're trying to make decisions, you're trying to leverage all these different data sets, 

00:24:35 Akash Sharma 

you are able to make those decisions in the most informed way, in seconds and minutes, not hours and days, right? And I think that level of efficiency and adoption would unlock efficiencies in the way we do these transitions and develop 

00:24:55 Akash Sharma 

energy. That could be considered a Kasparov moment. I apologize, I don't think this is exactly the answer you might be looking for, but, you know, I think there might be a lot of these smaller events that lead to an overall sort of integrated solution. 

00:25:08 Chris Sass 

So that brings up the question in my mind of how efficient this is, or how you know this is bringing value, right? I mean, I've had analysts all along; they go out and do this research, they come back. And you're very confidently saying, look, I can take this huge corpus of data and 

00:25:23 Chris Sass 

I can give 

00:25:23 Chris Sass 

you quick answers. So how do you determine how much more efficient, or how much better, 

00:25:28 Chris Sass 

my organization is doing with this kind of 

00:25:30 Chris Sass 

a system in place? 

00:25:32 Akash Sharma 

That's a great question. So with the Instant Analyst product that we have out right now, we've done tests with groups of our users. You know, not all of them, but we've done tests with groups of users. 

00:25:42 Akash Sharma 

And we did a lot of blind tests early in the development, where we would say: all right, this is what I want you to figure out. One team uses the Instant Analyst solution; one team just uses the regular way, finding the report within the repository. And we've seen up to 80% improvement in productivity and efficiency in finding these answers. 

00:26:04 Akash Sharma 

Right. So as an example, take something that would take 30 minutes of finding through text search, reading, researching, and then coming to the answer: 

00:26:12 Akash Sharma 

the user is able to get to 80% of the answer in seven seconds, and then spends about 5 to 10 minutes, you know, validating, confirming, making sure that the answer is exactly what they're looking for. But that's a huge improvement from 30 minutes, right? And those are the kinds of tests we've done in the current example. We'll continue to do these sorts of 

00:26:32 Akash Sharma 

test cases as more and more things come out. But any products that we build in this regard for the energy industry have to meet one of sort of three principles of product design. One, it should make you more productive. 

00:26:45 Akash Sharma 

Two, it should make information and tools more accessible. 

00:26:50 Akash Sharma 

Or three, it should make things more customizable, right? And so those are the different metrics that we try to capture in one way or the other. And the metrics that I just mentioned, 

00:27:00 Akash Sharma 

those are more on the productivity side. 

00:27:02 Chris Sass

Does this risk becoming a crutch for some analysts along the way? I mean, if I look at the CrowdStrike moment, where folks are having trouble getting online, and I'm doing all my research through a bot, so to speak, or an instant analyst, is there fear from some of the customers about the skill sets?

00:27:20 Amash Sharma

I think that's a fairly valid question. I think it is as much of a crutch as calculators and Excel have been to the ability to do mental math. Has that weakened our ability to do some quick arithmetic in our heads? Possibly. But it has also expanded our ability to do mathematics at a scale that we couldn't possibly have done on a sheet of paper.

00:27:42 Amash Sharma 

Right. So I think there are trade-offs in that. 

00:27:47 Jeff McAulay 

Gosh, what's maybe around the corner? I want to highlight, because you've hinted at some examples that might actually be tremendous leaps. We started off talking about text-based analysis; I think everybody is comfortable with that. But the things that you were hinting at in terms of siting get into very, very different types of

00:28:06 Jeff McAulay 

Documents that might be a FEMA flood map. 

00:28:10 Jeff McAulay 

Or it might be an electrical transmission-and-distribution diagram, or land parcels. These are complex diagrams. They're esoteric. They're not text-based, they're graphical; you're interpreting what different lines mean. It seems like a huge, huge leap to go from a large language model to

00:28:30 Jeff McAulay 

an agent that can read and interpret those types of visual technical documents.

00:28:38 Amash Sharma

Absolutely. It is a massive leap, and therein lies why it is one of the more challenging solutions. As we are building these solutions, these sort of phase-two solutions if you will, building them and having them work a certain way is not the challenging part. It's having them work consistently and repeatably in the way that we want them to. That's where the challenge lies. These systems are highly non-deterministic, but that is their power. If they were extremely deterministic, then we could just write, and I'm sure I have done that earlier in my career, a 500-line case statement to handle every edge case. Those systems work efficiently, but they don't really scale. So you want it to be non-deterministic.

00:29:26 Amash Sharma 

But you also want it to be deterministic when you want it to be. So as an example, Jeff, take the case study that you mentioned about how we are looking at these agentic systems.

00:29:37 Amash Sharma 

Let's say a user asks: this is my particular substation, and I want to figure out what parcels are within a certain radius of it. To execute that, all the user does is ask that question in English. The first thing the model has to do is figure out what the substation is and extract the substation name, and then

00:29:57 Amash Sharma 

find a similar substation, or an exact match, in our substation database. That means understanding text similarity, that means understanding text-to-SQL, and that means understanding edge cases and handling deterministic and non-deterministic solutions. Once you have that.

00:30:13 Amash Sharma 

Then the model needs to figure out: all right, I have a name. There is a separate agent whose job is to build a radius around a particular point. There are a lot of different ways of doing that. If you build an actual radius and do a spatial intersection on the fly, then you might as well ask the question and go grab a cup of coffee, because it's going to take a few minutes.

00:30:33 Amash Sharma 

We are looking into solutions like H3 indexing, which allows large spatial indexes to pre-exist. What you've done then is translate the question: you have a substation, you have a radius amount, and these are two variables that go into a

00:30:48 Amash Sharma 

function whose input is lat, long, and distance, and whose output is a list of parcel IDs that intersect the index cells fitting that criteria. The output of this function is fed back to the LLM, which basically says: here is the list that was asked for; return this to the user.
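
The pre-built spatial index Sharma describes (H3 in their case) can be illustrated with a simplified square grid in place of H3's hexagonal cells. The parcel IDs and coordinates are made up; the point is that the expensive spatial work happens once, offline, so the query itself is a cheap lookup:

```python
from collections import defaultdict

CELL_DEG = 0.01  # ~1 km grid cells; a stand-in for H3's hexagonal cells

def cell_of(lat: float, lon: float) -> tuple[int, int]:
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

def build_index(parcels: dict[str, tuple[float, float]]):
    """Pre-compute the index once, offline, so queries skip spatial math."""
    index = defaultdict(list)
    for pid, (lat, lon) in parcels.items():
        index[cell_of(lat, lon)].append(pid)
    return index

def parcels_near(index, lat: float, lon: float, radius_km: float) -> list[str]:
    """Return parcel IDs whose grid cell falls within the radius (approximate)."""
    cells_out = int(radius_km / 111 / CELL_DEG) + 1  # ~111 km per degree of latitude
    r, c = cell_of(lat, lon)
    found = []
    for dr in range(-cells_out, cells_out + 1):
        for dc in range(-cells_out, cells_out + 1):
            found.extend(index.get((r + dr, c + dc), []))
    return sorted(found)

parcels = {"P-001": (31.995, -102.081), "P-002": (31.996, -102.079), "P-003": (31.60, -102.90)}
index = build_index(parcels)
print(parcels_near(index, 31.994, -102.080, radius_km=2))  # → ['P-001', 'P-002']
```

Real H3 cells avoid the distortion of square degree-based cells and support multiple resolutions, but the pre-index-then-lookup structure is the same.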

00:31:09 Amash Sharma

Right. So the language model, if you think about it in this regard, is basically parsing the question into its constituent pieces and triggering what it thinks is the best agent to do the job. It's actually not finding the answer for you. The last thing the LLM does in this sort of workflow is compile the results of the different agents in a logical manner and say: all right, now I have this new set of information. Given this lat-long, given this substation, given this list of parcel IDs, here is what I can tell the user. The language model has no context on how those pieces were calculated.

00:31:41 Amash Sharma 

That's what the agents are for.

00:31:43 Amash Sharma 

But that's also where the challenge and the difficulty in scaling these systems come in: how do you ensure that the model chooses the right agent all the time? It does in a lot of cases, but where it doesn't, that's where you have to continue to iterate, design, and improve. Does that answer your question, Jeff?
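
The orchestration pattern just described, where the language model only parses the question, picks agents, and compiles their outputs, can be sketched as a plain dispatcher. Here the LLM's planning step is replaced by trivial keyword rules, and the agents return canned values, purely so the structure is visible; none of this is Enverus's actual code:

```python
# Each agent is an ordinary function; the "LLM" only routes and compiles.
def substation_agent(question: str) -> dict:
    # In reality: text similarity + text-to-SQL against a substation database.
    return {"substation": "Midway 138kV", "lat": 31.99, "lon": -102.08}

def radius_agent(question: str, ctx: dict) -> dict:
    # In reality: a spatial-index lookup (e.g. H3) around ctx["lat"], ctx["lon"].
    return {"parcels": ["P-001", "P-002"]}

def route(question: str) -> list:
    """Stand-in for the LLM's planning step: pick agents from the question."""
    plan = []
    if "substation" in question:
        plan.append(substation_agent)
    if "parcel" in question or "radius" in question:
        plan.append(radius_agent)
    return plan

def answer(question: str) -> dict:
    ctx = {}
    for agent in route(question):
        # Agents that need prior context take it as a second argument.
        result = agent(question, ctx) if agent is radius_agent else agent(question)
        ctx.update(result)  # the compile step: merge agent outputs
    return ctx

print(answer("Which parcels are within a 2 km radius of my substation?"))
```

The hard part Sharma highlights lives in `route`: a real router is a non-deterministic model choosing among many agents, and making that choice reliable is where the iteration happens.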

00:31:59 Jeff McAulay 

It sounds like you've broken it down into the pieces of the puzzle. It's maybe not as far off as it might feel, but there are a lot of interstitial points that need to work before that's a reliable application. To your point, though, it's coming. One more application that you seemed to hint at was energy trading.

00:32:12 Amash Sharma 

Absolutely. 

00:32:21 Jeff McAulay

So, are there also time-series energy evaluations? Can these generative models actually do predictive price modeling? Are we going to see more of that in the future?

00:32:32 Amash Sharma

One of the areas where we are using generative models in time series is actually building the time-series model from scratch, like a foundational model.

00:32:42 Amash Sharma 

Instead of text, we are just using numerical information, which is great because it makes for a much smaller and more affordable model. Of course, you still have the nuances and complications of building it right.

00:32:56 Amash Sharma 

When it comes to actually building these models out, I think at the time scales and frequencies at which energy trading happens, it's going to be harder to build and maintain a model of that scale. But when you're looking at longer horizons, for example ERCOT load for the next month, or Permian production curves for the next year.

00:33:16 Amash Sharma 

We've done some early tests, and the model seems to perform really well. These are pretty massive models that take hours and hours to train, but they are able to build and develop these sorts of time-series forecasts.
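
One common trick behind numeric foundation models of this kind (used, for example, by published time-series transformers such as Amazon's Chronos; the transcript does not say which approach Enverus uses) is to quantize a scaled series into a small discrete vocabulary so a standard transformer can be trained on it like text. A minimal sketch of that tokenization step, with made-up load figures:

```python
def tokenize_series(values: list[float], vocab_size: int = 256) -> list[int]:
    """Scale a series to [0, 1] and bin each point into one of vocab_size tokens."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # avoid division by zero on a constant series
    return [min(int((v - lo) / span * vocab_size), vocab_size - 1) for v in values]

def detokenize(tokens: list[int], lo: float, hi: float, vocab_size: int = 256) -> list[float]:
    """Map tokens back to approximate values (bin centers)."""
    return [lo + (t + 0.5) / vocab_size * (hi - lo) for t in tokens]

load_mw = [41200.0, 43950.0, 47800.0, 45100.0, 42300.0]  # invented hourly load values
print(tokenize_series(load_mw))  # → [0, 106, 255, 151, 42]
```

Because the vocabulary is a few hundred numeric bins rather than tens of thousands of word pieces, the resulting model can be far smaller than a text LLM, which matches the "smaller and more affordable" point above.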

00:33:30 Amash Sharma

Again, going back to that perspective: this model can help provide data points, fundamentals, and inputs that could inform your trades. But as the models stand today, they may not operate at the frequency and speed at which you could execute trades directly on them.

00:33:50 Amash Sharma 

The other part of this that I will always mention with any of these models, which I think we hinted at earlier as well,

00:33:56 Amash Sharma 

is that transformers and generative AI models are very black-boxy. It's hard to explain them. There are approximations and inferences you can do to say, this is how the model does this. But if you were to ask me specifically, hey, Akash, why did the model predict this as its next month's forecast?

00:34:16 Amash Sharma 

I can't give you the same level of confidence I could if it were, let's say, a multiple regression or an ARIMA, one of those statistical models. So there is an apprehension, a resistance, to using these systems for high-stakes decisions like trading.
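
The contrast Sharma draws comes down to inspectability: a multiple regression exposes fitted coefficients you can read directly, whereas a transformer's weights have no such reading. A tiny illustration with invented data (the variables and numbers are placeholders, not real energy inputs):

```python
import numpy as np

# Made-up monthly data: predict load from temperature and an industrial-activity index.
temp = np.array([30.0, 35.0, 40.0, 45.0, 50.0])
activity = np.array([1.0, 1.1, 0.9, 1.2, 1.0])
load = 2.0 * temp + 10.0 * activity + 5.0  # exact linear relationship, for clarity

# Fit load ~ temp + activity + intercept by least squares.
X = np.column_stack([temp, activity, np.ones_like(temp)])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)

# Each coefficient has a direct reading, e.g. "one degree of temperature
# adds ~2.0 units of load" — the explainability a transformer lacks.
print(np.round(coef, 2))  # → [ 2. 10.  5.]
```

That direct line from coefficient to explanation is what lets an analyst defend a statistical forecast in a way that is much harder with a generative model.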

00:34:34 Chris Sass

There's a lot going on here, and we're getting close to time, but I have one last question. There seems to be a lot of disruption, and you're moving at a fairly quick pace. What are the next hurdles, or the gates to go through, to see the next evolutions? You and Jeff talked a little about the incremental things, some of them done, some on the near horizon. What's keeping us from going faster? Or are we going super fast and we'll see amazing change in the next 12 months? What are the next gates for this kind of technology?

00:35:12 Amash Sharma 

I think we are going really fast. The rate of innovation in some of these things is truly breakneck. There are functionalities and features we developed when we first started building six months ago that had to be built out by hand and are now standard packages, part of LLMs.

00:35:29 Amash Sharma 

Having said that, as you think about that gap and what we could do to improve the application of these kinds of things:

00:35:37 Amash Sharma

The models have to get better at multimodal processing. That's just a fundamental requirement. A lot of the challenges that happen right now, and this goes back to one of Jeff's earlier questions about how you look at complex geometries and things like that, come from having to convert those complex geometries into a digital numerical format. Imagine a world where language models are able to look at that image and process it natively. That would allow us to scale the application of these models much faster.

00:36:06 Amash Sharma

Also, fine-tuning and model retraining need to get cheaper and faster, right?

00:36:13 Chris Sass

But does that mean the hardware is not there yet? Is the compute there?

00:36:15 Amash Sharma 

I think, yeah, the hardware is there; the compute is there, but compute affordability and availability need to change, right? With NVIDIA's new Blackwell chips coming out, which are going to be faster and more efficient, I think those strides will

00:36:32 Amash Sharma 

help a lot. Because right now, if you think about it, the whole process, as I'm defining it, is that you start with the foundational model and then create systems around it to orchestrate it, with guidelines and things like that. Imagine a world where you could take a smaller model and teach the model itself to perform in a very, very specific way, where every agent has a trained

00:36:53 Amash Sharma 

model specific to its task.

00:36:56 Amash Sharma

That is something that will allow us to scale much faster, but it's really, really expensive. And until we get to that point, I think moving these things forward will be a high-risk, high-reward type of situation.

00:37:14 Jeff McAulay 

Oh gosh, this has been a tremendous journey. We covered some very detailed and nuanced energy applications, from the accessibility of data in PDFs, to multimodal non-text documents, to time series and energy data. And you've painted a vision of

00:37:34 Jeff McAulay

where we can all be using these tools to be more efficient in analyzing energy data. That's very exciting. I'm looking forward to seeing big things from you and Enverus, and thank you so much for joining us on the show today.

00:37:47 Amash Sharma

Absolutely. Thank you so much again, Jeff and Chris, for having me on the show. Fun discussion.

00:37:51 Chris Sass 

For our audience: this has been a great episode. I always love talking technology and hearing what's happening in AI. We get a lot of questions about AI, so I hope you found this informative. If you did, take a second and hit that like button, add a comment, and follow us on YouTube. We'll see you again next time on the Insider's Guide to Energy. Bye for now.