
Alter Everything Podcast

A podcast about data science and analytics culture.
Podcast Guide

For a full list of episodes, guests, and topics, check out our episode guide.

Go to Guide
AlteryxMatt
Moderator

This week on Alter Everything, we chat with Scott Jones and Treyson Marks from DCG Analytics about the history and misconceptions of AI, the importance of data quality, and how Alteryx can serve as a powerful tool for pre-processing AI data. Topics of this episode include the role of humans in auditing AI outputs and the critical need for curated data to ensure trustworthy results. Through real-world use cases, this episode explores how AI can significantly enhance analytics and decision-making processes in various industries.
Panelists

  • Treyson Marks, Managing Partner @ DCG Analytics - LinkedIn
  • Scott Jones, Principal Analytics Consultant @ DCG Analytics - LinkedIn
  • Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers - LinkedIn

Topics

  • The history of AI and common misconceptions
  • The importance of data quality and curated data
  • Alteryx as a pre-processing and auditing tool for AI
  • The role of humans in auditing AI outputs
  • Real-world AI use cases across industries

Episode Transcription

Ep 183: Getting Real Business Value from AI

[00:00:00] Join Us at Alteryx Inspire 2025!

[00:00:00] Megan Bowers: Hey, Alter Everything listeners, we wanted to let you know that you can join fellow data lovers, analysts, and innovators at the Alteryx Inspire 2025 conference. It's the analytics event of the year, happening May 12th through the 15th in Las Vegas. Head over to alteryx.com/inspire to register now.

We would love to see you there.

[00:00:18] Introduction to the Podcast and Guests

[00:00:18] Megan Bowers: Welcome to Alter Everything, a podcast about data science and analytics culture. I'm Megan Bowers, and today I am talking with Scott Jones and Treyson Marks from DCG Analytics. 

[00:00:37] Diving into AI: History and Misconceptions

[00:00:37] Megan Bowers: In this episode, we chat about the history of AI, misconceptions about this technology, improving data quality and getting real value out of AI.

Let's get started.

Scott, Treyson, it's great to have you on our show today. Thanks so much for joining. I'd love to start out with some quick introductions; if you could just say a bit about yourself and what you do for work. We'll start with Treyson.

[00:01:04] Treyson Marks: Okay. My name's Treyson Marks. I am the managing partner of DCG Analytics. DCG is a consulting firm with a few different lines of business.

We have a tax line of business, so we help people with personal and business taxes. Then we have a finance line of business, so we help with bookkeeping for a lot of small businesses and nonprofits. And then we have a third line, which is analytics. In our analytics line of business, we really focus on Alteryx and the whole BI stack, and how Alteryx can support it.

As for my background, I've been doing Alteryx consulting now for almost eight years. I'm an Alteryx ACE, and I just love the community and what it's done for my career, and I just think the platform's really cool.

[00:01:46] Scott Jones: Scott Jones, I'm also with DCG. I'm the principal analytics consultant, so I build workflows and different analytics for our customers.

I also provide consulting and advisory services, internal and external. I've been working in analytics since about 2007, when I started using Alteryx. Some people may remember me; I actually spent almost 12 years working at Alteryx, then moved on into the AI space and came back into the fold, joining DCG last year.

[00:02:17] Megan Bowers: Awesome. Well, it's great to have you both. With your wealth of experience, I'm excited to dive into some AI topics with the idea of breaking it down, talking about it not just at a high level, but about what AI really means for the analytics industry.

[00:02:32] Understanding AI: Definitions and Examples

[00:02:32] Megan Bowers: And I think a good place to start is just kind of level setting with what AI is. How would you describe and explain it?

We'll start with Scott for this one. 

[00:02:43] Scott Jones: It's an interesting question, because I think a lot of people look at AI, especially with the boom that's going on right now, and don't realize that AI has been around for a long time. I recently read something I really agreed with: a lot of technologies are called AI, and as they become the norm and people get used to them, we stop calling them AI and start talking about the next phase of AI as it moves forward.

People don't think the search in their browser is AI. It's really about being able to take information and let a machine or a computer go through that information a lot faster and provide feedback on what it sees. We think about things like OpenAI and Gemini and all these different text-based solutions, right?

We ask it a question. It does a search, comes back and says, this is what I found, and gives it to us in a natural language that we're used to hearing. That is using what are called large language models, which take information from everywhere they can get it, encode it into a model that can be quickly searched and retrieved, and then take the things we ask and the responses we give and continue adding to that training, so it becomes more and more human-like in its responses. But on the other side of that, there are also things like image creation and retrieval.

So we see things like, hey, make me a picture of this person, but with a donkey's body. That is an AI going out and searching all these different types of images and recreating that based on the information it's been fed. And so it all comes down to the data it's been fed. Then there are other places, like robotics AI, where we've taught this thing something repetitive.

We're seeing it in the medical industry. They teach it how to do a specific surgery; that's the only information it gets, and it does that repetitively. It doesn't learn anymore from what it's doing, but it can do it a lot faster and a lot more accurately based on what it's been fed. A great use case for that: my in-laws are all farmers, and with the harvesters they use to scoop up the corn when it's ready, they actually plug in GPS directions and drive it around, and now it's remembered that.

From an AI perspective, it's learned: okay, this is the route, this is the speed, this is how much I should be taking before I call the next truck. People won't think about that being AI, but it really is, right? It's learned that information and become smarter at how to do it. And then you have this other piece, which falls into expert systems.

This one, I think, we could get a lot more use out of, honestly, and that is looking at massive data sets and looking for data patterns, right? Machine learning. We do this on a small scale all the time; we drop in, do our Python analysis, come back and do neural networking, but a machine can do it millions and millions of times in loops compared to what a human could do.

That's really what AI is, right? Being able to do those functions to assist in what we do, and do it faster, by us continually feeding it more and more information so it gets better. That's the way I look at AI and what it is in the larger ecosystem. There are a lot of other types, and theoretical ones, but I think that's where we use it today.

[00:06:06] Treyson Marks: I like the point you made about how AI's been around for a while. It seems like it got a rebrand in the last two years, because we've been talking about machine learning for six or seven years now, and really, where "AI" mostly gets used is for something like OpenAI or ChatGPT, right?

Once it became widely available for everybody to use, for having a conversation or a quick Google, that's when it seems like it got that AI rebrand.

[00:06:33] Scott Jones: The earliest AI was back in the early fifties, in '51 or '52, when they taught a machine to play checkers. Then you saw the evolution of that: it was playing chess masters, and then it played Jeopardy in the long run. So you saw that kind of prediction, where they fed it enough information and kept building it up. We can equate that to what we're doing now, right? We're feeding it all of our data and saying, oh, these are the important things for you to glean out of it.

If you think about Auto Insights from Alteryx, that's what we're doing. We're feeding it information in the cloud, and it's coming back saying, hey, these are the most important pieces of information when you put them together. This is likely most important to you, because you've told me who you're targeting, what your job title is, and what kind of company you work for.

So it can put all that together and say, hey, here's what you should be looking at. And that goes back to: hey, I'm gonna feed you all the different moves that are possible in chess, and these are the ones most likely to win. It's doing the same thing, but on a much larger scale. We're at a place right now where it's scaling so fast and so large because of the availability of things like CPUs and GPUs to keep processing it.

Plus, there's the number of people constantly feeding it information. Think about your browser: with Google search or whatever you use, the more people have searched and the more they've clicked on specific results, the more it learns what people are likely searching for. It's gotten better over time because we're feeding it data.

[00:08:09] Megan Bowers: Yeah. 

[00:08:09] Misconceptions and Realities of AI

[00:08:09] Megan Bowers: So based on what you guys have said, and what I've read as well, AI's been around for a long time; we're just seeing a re-popularization. It's more accessible now, with this easy chat interface. But I'm wondering what you all see as some misconceptions around AI. We'll start with Treyson on this one.

[00:08:29] Treyson Marks: Yeah. One misconception that came out early on is that it does everything, and it doesn't, necessarily. I know a lot of people are concerned with how AI is gonna replace them and things of that nature, but in the current state of AI, it isn't solving the problems that humans can solve or are needed to solve. Prompt engineering is still very important: knowing how to ask the question, or how to debug whatever the results are, to sift through them and make sure everything makes sense. A real good example of this: Scott and I are working on a project right now where we're having to do some work in Amazon's Redshift, and in order to speed up some of our development process, we're leaning on OpenAI and ChatGPT to help with certain blocks of code.

We understand how to write, say, a COALESCE function in there, but it's a lot easier to have it write that out for us and then just go through it. However, we've hit a roadblock in a specific function that we need, and today we're gonna get together and actually do it ourselves. So AI's not quite ready to be our overlords yet.

There's still a real need for the human element when working with AI.
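To make the code-assist workflow Treyson describes concrete, here is a minimal Python sketch of asking an LLM to draft a Redshift COALESCE snippet while keeping a human in the loop. It assumes the `openai` Python client and an API key in the environment; the model name, prompt, and column names are illustrative, not from the episode.

```python
# A sketch of LLM-assisted SQL drafting: the model proposes a Redshift
# fragment, and a human still reads, tests, and debugs it before it ships.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a Redshift SQL SELECT that returns customer_id and the first "
    "non-null value among mobile_phone, work_phone, home_phone as best_phone "
    "from the customers table, using COALESCE."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

draft_sql = response.choices[0].message.content
print(draft_sql)  # the human review step happens here, not in production
```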

[00:09:43] Scott Jones: I think that's an important point, and I don't have the belief that it's ever gonna be our overlord, where we're living in WALL-E or The Terminator, either one. When we talk about it replacing people, we need to look at it less as a replacement and more as a shift.

So I'm gonna age myself here: do you know what 411 is? Are you at an age where you had a phone plugged into a wall like I did? When I didn't know a phone number and didn't have a phone book, I dialed 411, asked, and it charged me 50 cents and connected me to whoever I was trying to call.

I haven't used 411 in decades because of something called the internet. Every phone number is out there. So those people who answered those calls, you can say AI replaced them. But did it really replace them, or did it just shift them? We now need somebody who can feed that information to the AI, so hopefully it's creating more modernized positions and fewer data entry jobs, for lack of a better term, of something basic for people to do.

AI is a tool to make us better at what we're trying to do.

[00:10:54] Megan Bowers: That's definitely my hope, that it's more of a shift.

[00:10:57] AI in Data and Analytics: Lessons from the Past

[00:10:57] Megan Bowers: When we were talking about this earlier, you guys brought up some interesting points about the history of other data and analytics movements, like the push years ago to go to the cloud, or the push for analytics and dashboards before that.

How should we be approaching AI in the context of those other analytics patterns, so we don't make the same mistakes the industry has maybe made in the past?

[00:11:26] Treyson Marks: Yeah, I'll expand a little bit on what you just said there, because this is a lot of the conversation we have around AI.

Everyone in every department is trying to figure out, how do we leverage AI to do better things? How do we work better with our AI? And then how do we also potentially use AI to solve our data problems? Data's everywhere; it may not be cleaned or whatever. And we've been making these shifts for a long time.

I'd say the first major one was moving into data warehousing. A lot of people put a lot of money and effort into moving into data warehousing, and it was very expensive. Then all of a sudden it was, actually, we need to try this giant blob storage sort of thing, like Hadoop. And they put a lot of money into Hadoop, just putting the data up there. This isn't to say that it didn't create...

[00:12:11] Scott Jones: ...a lot more problems while we did it.

[00:12:14] Treyson Marks: Yes. I've seen a lot of efforts where people put a ton of time and money and effort into those things, and then they had to roll them back when the next thing came along, or once they figured out a better way to do it.

Then there was the push for dashboarding and BI. This is right when I got into this space, when Tableau was the new hotness and everyone was doing everything in Tableau, and then Power BI came out. Again, these things didn't solve our data problems; they expanded them, or exacerbated them, I think. Because now there's data everywhere, and some organizations go in deep and make a ton of dashboards, and then they get this overload of information, and there are all these hours and costs put into building these things, and then they don't get used.

Then it moved into the cloud, and people just put everything in the cloud, similar to the way we did with Hadoop. And now I'm starting to read articles about cloud repatriation; people are asking, okay, how do we pull stuff out of the cloud, because maybe we don't need cloud for long-term storage, given the cost.

I think AI is very much the next example of this, where people are going hardcore into AI and thinking, oh, it's gonna solve the data problems we have. But the problem here, the same problem that existed with all of those previous new things in the analytics space, is that if your data is terrible going in, putting it in this new system isn't exactly gonna solve it. And it's probably worse than before, because people use AI to help with decision making. We just need to make sure that as we go toward this, we're doing a good job of making sure our data's clean and structured and ready to move into this AI. Because ultimately, when we need to come to the defense of this tool we've built, we have to be able to say, well, here are all of our steps.

Documentation's super important in everything in our space. So it's, okay, how did we get here, and why is it giving us these answers? The trust that it builds from being able to do that is really important.

[00:14:11] Scott Jones: Trust is the key thing there. We have this history of, let's throw everything in there: let's throw everything into NoSQL blobs, let's throw everything into the cloud, whether we're gonna use it or not, because maybe we'll need it. And that's happening with AI. There are large language models, small language models, these different pieces, but they're trying to train them to be as big as they can possibly be.

And we keep hearing, you know, OpenAI saying, okay, we trained against 2 trillion, we trained this much. But what happens when you don't have quality data going in there, and you're just saying, scrape every single website and book, and listen to every single person and every single voice? You run into a trust problem, because a couple of things happen. The first one is bias, right? If enough people "correct" the model that the sky is red, the AI will take that as being the truth, because the AI doesn't know the difference between true and false; it's just taking the information given to it. So if all of a sudden a million people hit the AI and say, no, no, one plus one isn't two, it's five, then eventually it's going to say that back.

And what that brings out is something called a hallucination. If we see one plus one equals five, we know as humans not to trust that. And it actually takes a lot fewer cases of mistrust to lose trust than cases of trust to gain it; you can lose it a lot quicker than you can gain it. People won't want to use it. Although we may be in a society that doesn't care as much as it used to about things like that, but that's a whole other podcast.

[00:15:58] Megan Bowers: It makes me think about the Google AI summaries that they rolled out, when it was telling people to put glue on their pizza, because it was trained on Reddit data that was being sarcastic.

AI doesn't know sarcasm. And Google is, I think, doing more with AI, starting to implement even more, and they're trying to avoid those pitfalls. But yeah, I think they did lose some trust there.

[00:16:19] Scott Jones: So did Bing and Yahoo and everybody else. So if we have all these hallucinations and you can't trust it, what's the answer?

Is it to feed it less information? I think it's to feed it more curated information, and when we say curated information, what we mean is better data. I'm a data guy, Treyson's a data guy; we have a bias toward the better the data you put into it, the better it's gonna be, whether it's your dashboard or anything else. Because that's what happens: you create a dashboard and you throw everything at it. I can make a dashboard say anything I want it to say, because it has all this information. But if you curate the data to where it can only show the truth, focused on that, then you're gonna have a higher quality piece of analytics to make decisions on.

So it all comes back to: what is the AI being trained on? What information is being fed to it? And the answer is tough, right? Do we stop letting it listen to people? Who are the people we do let it listen to? How do we get it to stop saying false things or dangerous things? If you think about it, this is being used a lot in medical.

I've worked with several hospitals over the last couple of years, and they're trying to implement AI to do things like predict diagnoses when somebody walks through the door. Do you want that to be wrong? Some are even processing claims, deciding whether or not they're gonna be denied, using an AI.

So if it's being fed incorrect information, now you could cost people money. You could cost them the ability to get the medical care they need, because of a decision that could be biased or hallucinated. These things have important real-world implications that we have to consider from the data side.

Another one I always think about is fraud detection in finance. That's a real heavy use case: you feed it your entire data set of when transactions happened and how often fraud happened, and it's probably 0.000-some percent. Finding that needle is incredibly difficult. That's a great use case for AI, because it can weed through all that stuff, but it had better be right.

Because if you accuse somebody of fraud and you're wrong, how much did that just cost your business? So good data is more important than ever as we start to go down this path.
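As a rough illustration of the fraud example, here is a minimal sketch of narrowing that haystack with an anomaly detector so a human can review the flagged cases before anyone is accused. It assumes scikit-learn; the synthetic data and contamination rate are purely illustrative.

```python
# A sketch of needle-in-a-haystack detection: fraud is a tiny fraction of all
# transactions, so the model's job is to shrink the review pile, not to accuse.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 100,000 ordinary transaction amounts plus a handful of oddly large ones.
normal = rng.normal(loc=50, scale=15, size=(100_000, 1))
odd = rng.normal(loc=500, scale=50, size=(10, 1))
amounts = np.vstack([normal, odd])

model = IsolationForest(contamination=0.0005, random_state=42)
flags = model.fit_predict(amounts)  # -1 marks suspected anomalies

suspects = np.where(flags == -1)[0]
print(f"{len(suspects)} transactions flagged for human review")
```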

[00:18:46] Treyson Marks: To your point, Scott, we can't remove the human element from it. Maybe what AI does really well now is making the initial assumptions, and then that assumption is fed to a human, who does an audit of the AI results, making sure that we as the developers, or we as the healthcare providers, are still taking a look at what's being told to us and making sure all of it makes sense. There's a moral dilemma here: does AI read morality, or does it just read what's "right"?

[00:19:19] Scott Jones: And who feeds it that information, that morality?

[00:19:21] Treyson Marks: Yes, yes. So what's the term we use for when it goes off in the wrong direction? You just said it: hallucination. "Hallucination" might be a poor term, because hallucination sounds like it knows the right thing but is seeing the wrong thing. And maybe it's more of...

[00:19:36] Scott Jones: That falls more into bias, I think. When you get into the morality side of it, that's bias, and bias can go either way. It could be biased toward my thinking, or biased toward your thinking. A hallucination means it just flat-out gave the wrong answer; saying one plus one equals five is more of a hallucination. Bias only gets into: do we agree with what it's actually saying here? Do we agree whether or not this claim should be denied? That could be a bias based on the information being fed.

[00:20:09] Treyson Marks: I think the term hallucination just bugs me, because, again, humans and even animals are a product of the environment they're raised in. And if you think about it like this: the AI was raised in this environment, so to that AI, this is the correct path or the correct answer.

[00:20:25] Scott Jones: think the term is because the AI, when it responds, is so confident. And AI doesn't second guess itself too much. If you ask it, how far is it from here to the store and it has 2000 responses that are correct and 10,000 that are wrong. It gives you the wrong one, and it's confident in that it's 80% accurate, even though it might not be.

And that's where you're absolutely right. We need to audit. People need to, and there's actually pieces of AI that exist to help this. So there's a couple things. They're small language models, which we don't hear about a lot, which a lot of people think are even more accurate. And they're nice because they're really focused to a single.

Type of organization. So if I am a, if I'm creating a chat bot and it is just doing banking policies for my service reps to type in a question and get a response and I fed it, all of my policies and all it's doing is bringing back that policy, we've created something very small for to look through and just know that, and that added in some prompt engineering to handle.

How people might be asking those questions, right? Because the way I ask it and the way you ask it could be completely different sentence structure. So we have to account for it. And then there's this other idea of people being able to correct it. Some people call it few shot learning. So the idea is. I do this, so I'm in Chachi PT and it gives me an answer and I know that answer's wrong.

I will correct it. I'll say, Hey, I actually tried this. This was a response I got. I was expecting this and it'll correct. Oh, I see what you meant. So now it can tape my. First question and my response, put that into a different piece of the training and relearn, hopefully even, maybe even learn from me specifically or the general public that say, oh, if people ask this, this is what they actually mean.

That is an audit type that really helps move things and, and make it better. So there is some of it built in, but they've got a long way to go. 
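A minimal sketch of the correction loop Scott describes, assuming the `openai` Python client: the user's correction stays in the message history, so the next answer is conditioned on it. The model name, prompts, and regex example are illustrative.

```python
# A sketch of few-shot-style correction: "I tried this, got that, expected
# this instead" is appended to the conversation before asking again.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "user", "content": "Write a regex that matches US ZIP codes."},
    {"role": "assistant", "content": r"\d{5}"},
    # The human audit step: point out what was wrong and what was expected.
    {"role": "user", "content": (
        "I tried that on '12345-6789' and it only matched the first five "
        "digits. I expected ZIP+4 codes to match too."
    )},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=history)
print(response.choices[0].message.content)  # e.g. a pattern like \d{5}(-\d{4})?
```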

[00:22:24] Treyson Marks: Sorry, now I have to ask: is a small language model in that instance something that's built on top of a large language model, like ChatGPT as a whole, and then its interaction with your specific account retrains it to answer?

[00:22:38] Scott Jones: It can be. That part is more the prompt engineering. A small language model is just saying, I don't need the whole world, I just need banking information. You're limiting what that model is, so it has less to look through, and that can help with its accuracy and speed, because it's not reaching out to consider every possible answer.

Think about that in data, right? If you open up your Power BI and you've got 15 tables, of which you're only using three, do you really need somebody dropping in those other tables? No, you just need your dimensions and your measures in there. Otherwise you're basically confusing the end user by providing them too much.
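As a loose sketch of that scoping idea, here is a hypothetical policy chatbot that only sees a small set of curated documents plus a system prompt keeping it inside that scope. The policies, the naive retrieval step, and the model name are all illustrative assumptions, not a description of any real system.

```python
# A sketch of a narrowly scoped assistant: curated context in, instructions to
# refuse anything outside it, instead of letting the model use "the whole world".
from openai import OpenAI

client = OpenAI()

policies = {
    "overdraft": "Overdraft fees are waived on the first occurrence per year.",
    "wire": "International wires over $10,000 require manager approval.",
}

question = "Does a $12,000 international wire need approval?"

# Naive retrieval: keep only the policies whose keyword appears in the question.
context = "\n".join(
    text for key, text in policies.items() if key in question.lower()
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": (
            "Answer only from the bank policies provided. If the answer is "
            "not in them, say you don't know.\n" + context
        )},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```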

[00:23:20] Ensuring Data Quality for AI

[00:23:20] Megan Bowers: I think some of the things you mentioned are interesting solutions, but I kind of wanna move on and talk a little bit about this issue of data quality and how we can really fix it. If the underlying data is so important to these models, we need good data, and there may be gaps there. How do you see Alteryx as a pre-processing step for AI to fill that gap?

[00:23:43] Treyson Marks: Yeah, that's a great question. If we go back to all these historical shifts in analytics, Alteryx has been there providing solutions for probably every one of them. I don't have the length of experience that Scott does on the platform, but whether you have a data warehouse, or data in the cloud, or data in blob storage, Alteryx has done a really good job at solving some of those accessibility problems, even giving line-of-business users the ability to get in and do analytics without having to understand how to write SQL, or, I can't even remember what Hadoop's language was, without having to do all of that.

I think it's probably a lot closer to how we use Alteryx with BI tools. We think Alteryx is just a fantastic engine for those platforms, because of its ability to churn through data and do analytics before the data actually hits those platforms, so that a Tableau or a Power BI isn't recalculating every single thing every time you click a button or every time it refreshes. Alteryx does a really good job at pushing just what it needs there.

In the same way, this is something Alteryx has been pushing for as long as I've been in it: first as the pre-data-science tool, and now as the pre-AI tool. So we think Alteryx is a great AI prep tool. It's a great way to give line-of-business users, the people who are really interacting with the data and understand how the business or a process runs, the ability to create these data sets and push them into the model.

There are still governance needs here. It doesn't completely absolve us of the need to be good stewards of our data, or to understand what's in it and that we're getting the right thing, which I think is just reinforced by every part of this conversation we've had thus far. But Alteryx, again, is a great tool for giving line-of-business users the ability to do what has traditionally been something only IT, or software developers, or engineers could do.

[00:25:43] Scott Jones: Yeah, I think those guardrails are important, and Alteryx does allow you to build in some safety nets through filtering and through repeatable processes, so you're not accidentally getting new stuff in. You talked about what Auto Insights does versus the way people used to do it, and we can call the old way dashboard Yahtzee.

You've got a whole bunch of dimensions and measures, and you start throwing stuff into a chart until you see something you didn't know. How many iterations does that take to actually find it? You tend to find the things you already know, or the things you're trying to prove. You have an idea: let me prove that with the data. So you're just throwing dice and seeing if you come up with what you need, and sometimes you get a Yahtzee. With AI and Alteryx, Alteryx can prep the data and let AI, using those expert systems I mentioned early on that do the analytics, run through all of it and say, hey, these are the things coming out as really important to answering your question.

And if we go back and look at the history of Alteryx, I can give you that history for sure. I started with Alteryx 1.x, and back then we were really focused on the data being really accurate. I worked for a big cell phone company, so where things happened was very important. Getting locations right, taking addresses and making sure the location we're matching to where an event happened is very accurate, used to be very difficult to do.

Then you move forward to where R came into Alteryx, and you have to get that data in just the right way, because R is really picky, and so is Python, really picky about your data being right. AI has a bit of a danger in that it isn't always as picky about your data being right before it gives you a result.

So you have to be even more careful to make sure your data's accurate going in, because you could make million- and billion-dollar errors based on incorrect data that you fed it while trusting what comes back. But if you trust your data to be accurate, you can trust the results that come out of it.

And thinking about the medical industry, the impact on certain industries can be incredibly important. I worked with a large airline; they were doing scheduling and trying to use AI to better predict when planes were gonna be arriving and leaving and how long it takes, so they could have the right number of gates and the right number of people in the right place.

We also did one with Alteryx where we predicted how much fuel was gonna be needed. You don't wanna mess up how much fuel's gonna be needed for your flight. So if we let AI do that, we'd better be really confident in our data, and that's where the human comes in, doing consistent testing afterwards. Alteryx can prep that data and make sure it's accurate, and then we get the results.

I would also use Alteryx afterwards for auditing and looking at the data, which I do; I use Alteryx to audit my results all the time. I drop things in there, I do different process analyses, I do different summarizations of the output, and I look: do these match up with what we know and what they should be? If something looks really off, let's go back and look at our original process. So I would argue it's both sides; an Alteryx-type platform can be incredibly useful for getting things right.
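A minimal sketch of that bookending pattern, translated to pandas for illustration: quality gates before the data reaches the model, and a summary audit of the output afterwards. The file names, columns, and thresholds are assumptions, not from the episode.

```python
# A sketch of prep-then-audit: validate inputs going in, then summarize the
# model's output and compare it against what we know should be true.
import pandas as pd

# Prep side: basic quality gates before anything reaches the AI.
df = pd.read_csv("transactions.csv")
df = df.drop_duplicates()
assert df["amount"].notna().all(), "null amounts found - fix upstream"
assert (df["amount"] > 0).all(), "non-positive amounts found - fix upstream"

# Audit side: summarize the model output and sanity-check it.
results = pd.read_csv("model_output.csv")
summary = results.groupby("predicted_label")["amount"].agg(["count", "sum"])
print(summary)

fraud_rate = (results["predicted_label"] == "fraud").mean()
if fraud_rate > 0.01:  # we know real fraud is a tiny fraction of transactions
    print("Flag rate looks too high - revisit the process before trusting it.")
```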

[00:29:05] Megan Bowers: I totally agree, and I think there are similar risks: running your business based on a dashboard, making these big decisions, and a lot of the data is wrong. Now we're moving into AI, and the use cases, and maybe the risks, are getting even bigger, so the data quality side is even more important. I like what you said about bookending it with Alteryx, the prep as well as the validation steps; I think that's a cool takeaway. And we're about to run out of time here, so I just wanna end on one final thing.

[00:29:39] Practical AI Use Cases for Analysts

[00:29:39] Megan Bowers: How can analysts really gain value from AI? What's maybe one example that you both see as an interesting case of what AI can look like day to day in the business for a data analyst? 

[00:29:53] Treyson Marks: One of the best use cases has already really been fleshed out: helping to write code, especially, again, for a platform like Alteryx.

It has cool capabilities like regular expressions, and regular expressions can be tricky to learn. Having something like ChatGPT help you write these regular expressions can be really valuable. Same for SQL code or Python. I use it to help me with my Python code sometimes, and it saves me a lot of time, especially on the debugging side. I'll get an error and I'm not quite sure what it means, and the ability to paste in my code and tell it what error I'm getting really helps speed up that development process. So I think that's a great use case for it.
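As a small illustration, here is the kind of regular expression an analyst might have ChatGPT draft and then verify on sample data before dropping it into an Alteryx RegEx tool; the pattern and sample rows are assumptions, not from the episode.

```python
# A sketch of verifying an LLM-drafted regex on known samples before using it.
import re

# Extract US-style phone numbers like (555) 123-4567 or 555-123-4567.
pattern = re.compile(r"\(?\d{3}\)?[ -]?\d{3}-\d{4}")

samples = [
    "Call (555) 123-4567 before noon.",
    "Backup line: 555-987-6543.",
    "No number in this row.",
]
for text in samples:
    match = pattern.search(text)
    print(match.group() if match else "no match")
```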

[00:30:32] Scott Jones: Yeah, that's one we both use all the time. I believe I'm pretty good at regex, and now and again I'll just ask, and about the third or fourth time the AI comes back and finally gets it for me, after I keep prompting it through what I need.

For me, the use case I think about goes back to AI being able to do things a million times faster, iteratively. So I think about the needle-in-a-haystack types of use cases, whether it's fraud analysis or machine issues, looking for machine failures. I did a use case long ago where we kind of used AI, but not really; we used Alteryx to do it, though AI would have been a lot more convenient, I would say. We were trying to find what was causing issues with cell phone towers in certain areas, and why they were losing customers.

So we started analyzing the towers that had the most dropped calls, and we actually found that specific towers had a specific component within the cabinet that was failing on a regular basis, and everywhere that had this one specific component had dropped calls and was losing customers. That is a real needle in a haystack: to go in and start looking through hundreds of towers, with tens of thousands of different parts that might exist, to find the one needle that might actually be causing it. And we did find it, and we did use Alteryx to do it, with machine learning, but an AI could have come back and narrowed that haystack down for us.

So when you have this minuscule thing to find in a huge amount of data, an analyst can be really effective by using the massive power and speed of an AI to locate those things.
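A minimal sketch of that needle-in-a-haystack analysis, assuming pandas and illustrative table and column names: join dropped-call counts to the parts in each tower cabinet and rank components by how strongly they co-occur with problem towers.

```python
# A sketch of narrowing the haystack: rank components by the average dropped
# calls of the towers that contain them, then hand the top candidates to an
# engineer for inspection.
import pandas as pd

towers = pd.read_csv("tower_metrics.csv")  # columns: tower_id, dropped_calls
parts = pd.read_csv("tower_parts.csv")     # columns: tower_id, component_id

merged = parts.merge(towers, on="tower_id")
by_component = (
    merged.groupby("component_id")["dropped_calls"]
    .mean()
    .sort_values(ascending=False)
)
# The components at the top are the candidates a human then goes and inspects.
print(by_component.head(10))
```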

[00:32:26] Megan Bowers: Yeah. I think both of those are great examples. Yeah. 

[00:32:28] Conclusion and Farewell

[00:32:28] Megan Bowers: I've really enjoyed having you both on the show today: tons of insights about AI, tons of interesting use cases.

I'm excited to hear your Inspire breakout session. Thanks so much for joining, and listeners can connect with you both on Community and look out for that breakout session.

[00:32:46] Scott Jones: Thank you so much. Thank you. Have a great day. 

[00:32:49] Megan Bowers: Thanks for listening. To learn more about Scott and Treyson and DCG Analytics, head over to our show notes at alteryx.com/podcast.

And if you liked this episode, leave us a review. See you next time.


This episode was produced by Megan Bowers (@MeganBowers), Mike Cusic (@mikecusic), and Matt Rotundo (@AlteryxMatt). Special thanks to @andyuttley for the theme music track, and @mikecusic for our album artwork.