Doug and Kristi discuss the impact of profiling in the data sets used to train algorithms and the extended impact to decision making.
This is a topic of particular interest to both of us due to our respective passions for data analytics. One of the most prescient points to come out of the discussion is the true degree of difficulty of creating an objective data set for the purpose of training predictive algorithms.
Doug’s business specializes in partnering with companies and non-profits to create value and capture cost savings without layoffs to fund growth and strengthen financial results.
You can find out more at www.TerminalValue.biz
You can find the audio podcast feed at www.TerminalValuePodcast.com
You can find the video podcast feed at www.youtube.com/channel/UCV5a4QbT-dXhpgb-8HJHdGg
Schedule time with Doug to talk about your business at www.MeetDoug.Biz
Welcome to the Terminal Value Podcast, where each episode provides in-depth insight about the long-term value of companies and ideas in our current world. Your host for this podcast is Doug Utberg, the founder and principal consultant for Business of Life, LLC.
Doug: Okay, welcome to the Terminal Value Podcast. I have Kristi Yuthas on the line with us today. Kristi and I actually worked together a couple of years ago, teaching a finance information systems class at Portland State University, and Kristi is very generously willing to talk to me again after that experience, which I thank her greatly for. What we would like to talk about today is analytics, particularly the advent of profiling and racism in analytics, and what we can do about it. Kristi, welcome.
Kristi: Thank you, Doug. And let me just say, I miss you in the classroom. That was really fun.
Doug: It was very illustrative. It was my first time teaching a class, and I came in with these great thoughts of students who would be yearning for knowledge. What I found was that many, not all, really just wanted an instruction sheet for getting an A so they could get out of class and move on.
Kristi: You know, but that was the most dynamic night class I've seen. These kids work all day, and they go to class at night.
Kristi: And you just kept me and everybody else engaged.
Doug: Yes, I remember I did tell a lot of stories.
Doug: Yeah, that class was a lot of fun. We will definitely have to find some time to teach together again in the near future. But one of the things Kristi has been doing quite a bit of work with is accounting analytics. Data science is really pervading everywhere, but I think it's becoming especially important in the accounting profession, because it's really impactful in different ways: forecasting results, testing for potential control gaps, testing for fraud. But that's actually not what we're going to talk about today. What we're going to talk about today is the place where data and analytics can actually get us into trouble, because there have been times when analytic algorithms have resulted in profiling that is really not fair to the individual. Kristi, would you take it away from there, after I served you up a nice juicy softball over the plate?
Kristi: Oh my goodness, there's so much to talk about here. But just in terms of even basic analytics: we get wrapped up a lot in the tools, in the coding, or in the statistical analysis, and we're really likely to lose the whole context. We just forget these are real people, these are real situations; we get so embedded in the data that we forget. And I think part of that is the way we teach these things. We use textbooks where the data sets match up perfectly with the problem, the factors all line up perfectly, or your regression comes out smooth with a nice R-squared. So we just aren't trained to really think about the messy world. What are the reasons the data look like they do in the first place? And then what are the consequences of the decisions that we made using that data? That's a big problem when people are making the decisions, and it becomes a bigger problem when algorithms are making those decisions.
Doug: Well, and I think that's a really prescient point, because at least what I've found is that there are kind of two extremes, right? Extreme A is people who don't believe in numbers and just want to make every decision based on their gut. And extreme B is where you have Skynet or WOPR making your decisions based on amoral algorithms. There doesn't really seem to be a lot in between; generally speaking, management structures have a really hard time staying away from one or the other. What have been your observations? I'll be happy to tell you mine, but I don't want to be the only one talking.
Kristi: Well, I've tried to train students to be in between, you know, to slow down the algorithmic analysis until they really understand the data and why they've got the data.
Kristi: Because once those tools get into place, they're sort of self-reinforcing, I mean.
Kristi: They learn, and sometimes the PhDs who created these algorithms have no idea what the algorithm is doing anymore. So it becomes kind of a black box, and if you feed it the wrong stuff to begin with, it's just going to cycle in on itself and create outcomes that you never anticipated.
Doug: Well and.
Kristi: So lots of examples of that but.
Doug: Sure. Well, okay, go ahead and give us a couple. Again, I'd be more than happy to put my subjective feedback in, but I'm interviewing you, so.
Kristi: Oh yeah, no, I'd love to hear your stories too. So this is kind of beside the point, but I just want to illustrate this in a visual way. Take, for example, a soap dispenser or a water faucet in a public restroom.
Kristi: Those things have a little sensor. You stick your hand down there and the soap comes out; you stick your hand under the faucet and the water comes out. Well, that works great when you have white skin, but if those machines were trained on white skin and you've got dark skin and you stick your hand up there, you might not get any soap. You might not get any water. Just because nobody thought about that. The people writing the code were predominantly white; the people they tested those machines on were predominantly white. There was nobody underrepresented in the room at any of those steps.
Kristi: And the problem is, nobody even knows this until these things are out in every airport, and then all of a sudden we realize, oh, we made a big mistake.
Kristi: That's the kind of thing we're trying to avoid at the outset, and you really have to take a step back. You can't just start with the dataset. You have to figure out where that data came from.
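Kristi's sensor example can be sketched in a few lines of Python. Everything here is invented for illustration: the reflectance values, the calibration rule, and the groups are a hypothetical stand-in for how a sensor tuned on only one group can fail on another.

```python
import random

random.seed(0)

# Hypothetical reflectance readings (all numbers invented): the sensor
# was only ever calibrated on high-reflectance (lighter-skin) hands.
light = [random.gauss(0.80, 0.05) for _ in range(1000)]  # calibration set
dark = [random.gauss(0.45, 0.05) for _ in range(1000)]   # never seen during calibration
empty = [random.gauss(0.10, 0.05) for _ in range(1000)]  # no hand under the sensor

# Pick a "hand present" threshold halfway between the calibration hands
# and the empty-sink readings -- reasonable IF the calibration data
# covered everyone, which it didn't.
threshold = (min(light) + max(empty)) / 2

def detection_rate(readings):
    return sum(r > threshold for r in readings) / len(readings)

print(f"lighter skin detected: {detection_rate(light):.0%}")
print(f"darker skin detected:  {detection_rate(dark):.0%}")
```

The threshold separates the calibration hands from the empty sink perfectly, so every in-sample test passes; the failure only shows up for the group that was never in the calibration data.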
Doug: Well, I think that's actually a lot more impactful than the average person understands. Any time you reverse engineer algorithms from a specific data set, those algorithms are going to be tuned to that data set. So unless it's a very, very broad dataset, you'll have a natural bias in those algorithms. There are some cases where that can be fairly innocuous, and other cases where it can actually be very harmful. An example we were talking about off camera is that if you let your AI run amok, it may, for example, look at crime statistics and find that areas with higher densities of African-American demographics have higher rates of crime on average, and conclude that African-Americans are more likely to be criminals. It's like, no, that's not okay. That's a line that you can't cross, and an algorithm won't know that unless you tell it. The problem is that even now, as AI and RPA and all this stuff is really coming into its own, it's still a very young profession. Like you said, the algorithm doesn't know what to do, or doesn't know where to stop, unless you tell it what to do or tell it where to stop. And the profession is still young enough that people haven't really thought of all the places to tell the algos where to stop, or to tell them what to do in a comprehensive way. Like you said, you have sensors that have been trained on light-colored skin, not thinking that maybe there are some people with darker-colored skin who would like to wash their hands too.
Kristi: Yeah, exactly. I love that racial profiling example, because if there was any bias in terms of who got arrested in the first place, let's just say.
Kristi: You know, black people got arrested at a higher rate for doing the.
Kristi: Same activity. Well, once your predictive algorithm tells you that a neighborhood is a high-crime neighborhood, you send more police into that neighborhood. So they start seeing more crime, and they start arresting more people.
Doug: And it’s self-reinforcing.
Kristi: Exactly. And so it's a no-win situation. If you don't understand all the things that happened before you even got to the data sets.
Kristi: To begin with, you're going to create algorithms that do that exact thing.
Doug: Yeah, and I think that's a very prescient point. I don't know for sure that I can articulate the answer, because I think it's very complicated, but it's something that needs to be answered. AI isn't going away, algorithms aren't going away, and data-based decision engines aren't going away. So we need to figure out some way to make sure they're programmed ethically, so that flagrant problems like this don't persist.
Kristi: Yeah. So one of the major ways to address that is exactly what you're doing: just discussing it.
Kristi: Bringing these points to the front, because people aren't aware of this stuff until they hear it. Once you hear it, you're like, oh, of course that might happen. But if you're not aware of it.
Kristi: You're just a coder, and we've been trained for so long into thinking that data are objective, that they reflect reality. Technology is objective, it's neutral, it doesn't have any opinion on anything. So we can just start with the data and the technology, and we'll get a result that's reliable. We've been trained in the scientific method; we think there's no politics or bias in it, you know.
Doug: Sorry, I'm suppressing laughter.
Kristi: Because we don't get a chance to really take a step back.
Kristi: But why did they come up with that theory in the first place? Who came up with it, and what is the context they came from? Why would they think that? Who gathered that data, and why? What were the circumstances under which that data occurred in the first place? We really have to go back a long way.
Doug: Well, I mean, the way that I describe technology is that I think of it as sociopathic. In other words, it's not good, it's not bad. It does exactly what it thinks it needs to, in the most optimal way, with no regard at all for morality, emotions, or impact.
Doug: And that's the way that I think about tech CEOs too, because that's the behavior I've observed.
Kristi: We do have technology ethics classes now. I think every computer science major in the country has to take at least one, you know.
Doug: That's probably a very good thing, because one of the more disturbing trends I've noticed is that, of course, technology brings exponential gains, but algorithms are fundamentally amoral. They're not moral unless they're designed to be moral, and it's hard enough to code them to work properly in the first place, much less to impart some form of a holistic moral belief system.
Kristi: And that slows everything down.
Doug: Right exactly.
Kristi: We have huge backlogs of projects that we have to get out the door. So sitting around discussing the ethics, when ethics isn't emphasized in the first place.
Kristi: That's just not in the budget.
Doug: Yeah, exactly. It's a very significant drag on throughput. But of course, as you've seen, you need to either be willing to put something out, take it back, and re-engineer it, or be willing to take a very long time to figure out some of these bigger problems before you release something. Otherwise you can run into a really tough situation.
Kristi: Right. And you can do a little of both of them.
Doug: Yeah, exactly.
Kristi: You know, because if you have a good audience, you can say, oh, what are we forgetting? Or keep asking the questions: why is this coming out this way and not that way? Then you can go back. So you don't have to figure out everything up front.
Doug: Well, yeah. I was being intentionally pedantic, but thank you for calling me out on the carpet.
Kristi: But to me, one of the most obvious and most important solutions is to get a diverse group of technology people in when you're designing and architecting.
Kristi: And coding these systems. That's not easy to do.
Doug: No it’s not.
Kristi: There was one article I read, maybe five years ago, that said all the black people working at Google and Apple together could fit on one jumbo jet, or something like that. I may have the statistics wrong.
Kristi: But the percentage of tech people from underrepresented minority groups is low.
Kristi: It's hard to get those people on your team. So we have to figure out a way to branch out. Maybe we don't get the MIT-trained data scientist, but we get somebody who's got the bigger contextual picture, who can bring something up. It's really important. And then, of course, we need to invest in getting kids into the system from a variety of backgrounds.
Doug: Correct. Well, what you're talking about right there is actually a rather complex situation, and the reason I say it's complex is this. The traditional politician-or-administrator way to solve it would be to say: hey, we don't have enough people of a certain demographic cohort at a certain company, so that's easy, just force people to hire more people in that cohort. Well, that may or may not address your core problem. One way it can fail is that if people haven't been trained adequately to produce a quality product, that could reduce your enterprise value. Another way you could run into a problem is that if they've been trained to think identically to people who have a different demographic profile, you could have diversity but still have groupthink. And that's one of the things I keep thinking of: just because people have a different gender or different skin color doesn't necessarily mean that they think differently.
Kristi: That is so interesting. Yeah, because we just always hire people who are like us. We were all trained in these high-tech programs, so we think only people who were also trained in very similar programs can do the job effectively.
Doug: Yeah, and that's the tension: there's obviously a certain degree of technical, business, and industry competence that you need in order to do your job effectively, but you also need to bring in people who have different ways of analyzing and assessing problems. Spanning that gap has actually been kind of hard, because there's a lot of concentration in the way that people are taught through school. So even in the case where you have a lot of people who look different, in a lot of cases the way they've been taught to address problems is pretty similar.
Kristi: That's because we think of these problems as technology-only problems. We don't think of them as social problems. That's one of the things that I think you are excellent at. When we were teaching together, I saw this all the time.
Doug: That was a lot of fun, by the way. We have to do that again.
Kristi: You understand the data systems and the technology, but you also understand that those things are there to serve business objectives, and you have the business acumen to link the two. Having somebody with all those different skill sets is very unusual. It feels like we err on the side of focusing on the technology, and we just forget about the consequences of these things.
Doug: Well, and the other thing you also have to bear in mind is that in a large organization, whether it's an educational institution, a government organization, or a corporation, once they get big enough, their functioning looks very similar regardless of which one you're looking at. At every layer of management you go through, somebody is judging, tweaking, or somehow skewing either the numbers or the message. So by the time something gets consolidated up to the top, it can be very, very different from what matriculated out at the bottom. And everybody just assumes that it consolidates up nice and evenly, and it doesn't work like that.
Kristi: So I'm glad you said that. That takes me away from racism a bit, toward a different but related point about how you're communicating the results. That's another time when we act like the data speak for themselves: like we're just going to present the data and let the decision makers figure out what it means. It's like, no, I don't think that's okay anymore.
Kristi: I think you take a clear stand and say what you think you are seeing in that data, and make that very clear.
Kristi: If you present the data that way, those messages are less likely to be forgotten or altered as they move their way up. The title of your chart, and things like that. It's really important to have that clear, forceful communication about what it is you're saying.
Doug: Well, and I don't know if you've ever had a chance to read Danny Kahneman's book Thinking, Fast and Slow. It came out, I think, about six years ago or so. I burn through my audiobook version about once every year or two; I think I've run through it five or six times now, and it's just utterly amazing. For those who haven't read it: first of all, go read it, it's outstanding. But the short version is that people, as in basically all people, have embedded biases in the way they think and make decisions. The part that's really tricky is that the smarter people are, the less biased they think they are, but those biases don't go away. So what ends up happening in the quote-unquote meritocracy is that you elevate people who all think they're completely objective, but who are actually imputing significant amounts of bias into every decision they make. For example, one thing that I saw in a decision-making class was just utterly amazing. You split the class into two parts, in different areas, put a little whiteboard in the front, and write a number on it: one side would see 30, the other side would see 200. You say, okay, don't pay any attention to that number at all. Now tell me, what do you think the population of Turkey is, in millions? The people who had a reference of 30 that they were told not to pay attention to guessed lower; the people who had a reference of 200 that they were told not to pay attention to guessed higher. This is on average.
So you have a class of, say, 60 people, 30 on each side. They were shown a reference that they were told had no bearing, that they were told not to pay attention to, and it still skewed their answers in a material way.
Kristi: Yeah, I've done that in class with random numbers.
Doug: It is creepy. It is so creepy.
Doug: Yeah, exactly. What that means is that as objective as we all think we are, we're actually very subject to suggestion and manipulation. Of course, if you're in sales, you should definitely pay attention to this, because if you can set an anchor, you can do very well. But it also means that if you're making decisions in a business or social context that have impacts on other people, you really owe it to the people you're serving, whether it's the shareholders who own the company, people in the community, or people in the university, to really put in the effort to bring objectivity into your decision-making process. Otherwise you can think that everything is straight and above board when you're actually just running off in a skewed direction without being aware of it in any way.
Kristi: Yeah. And that brings us to your point about groupthink.
Kristi: One of the best ways to get around that is to get people in the room with you who don't think like you.
Doug: Who don't think like you, yeah, exactly. Which feels counterintuitive, because a lot of times you'll go in circles about stuff and be like, oh, could we just get rid of these people so we could get going? But I think that in a lot of cases, what feels slow is actually the optimal solution, because that's how you get those perspectives in and avoid those big stumbling blocks: the things that cause you to have to rip out code, bring it back, and completely rework it before redeploying.
Kristi: Right, exactly. You know, and when you think about corporate ethics.
Kristi: We might think, oh, it's too expensive or too slow to get all those people in the room, and we just really want to hire the experts. We don't want a mix of people; we want this to go quickly. But when we think about corporate branding: it takes so long to build up an ethical, strong brand and reputation, and then it just takes one mistake.
Doug: Yeah. Right, exactly.
Kristi: We think it's too costly and expensive, but it's so incredibly damaging if we make big mistakes like that.
Doug: Yeah. Well, and that's a thought you just spurred: a lot of companies, when they're just starting out and trying to grow, the cost of mistakes is pretty low, because you don't have anything to lose. But once you've been going for a little while, the cost of mistakes starts going up. So as a company or entity starts growing, it's really important to keep building more robust decision mechanisms, because the cost of mistakes continually escalates.
Doug: And if you're talking about Portland State, it's such a huge institution that a big mistake is very, very costly. So it makes sense to take a little bit of extra time.
Kristi: You know, and of course, just like everybody else, we're soul-searching right now on these diversity issues. But for a long time we've recognized little things like this: if you're using textbooks all written by white people, or cases written by white people, you're just going to lack perspectives, which means that not only are you teaching one-sided material, you're also not attracting a diverse set of people to the classroom. So it's a cycle. We have to think about what we're putting out, and we also have to think about how we're welcoming people into these businesses. So, having diversity on the team. Here's an example from accounting at a university: if the accounting department has one black professor, the black students who graduate are going to have similar salaries to the white students. If there are no black professors, there will be a huge disparity in the salary levels of those students.
Doug: That’s interesting.
Kristi: So there are so many factors that could lead.
Kristi: To why that can happen. Who gets called on in class, who sees themselves in that role, who gets to network, who gets to be on the right project. There are so many little pieces. And so you might think, okay, we're just going to hire the best-trained person for this job, so it's a white person, and we're not going to hire anybody else. But we're only looking at a tiny fraction of the role: how well they can code that particular type of algorithm, which is a tiny fraction of that person's job.
Kristi: There's so much more to it, but that's the only thing we look at when we hire.
Doug: And the other thing I keep thinking about, whether we're talking about algorithms or diversity statistics, is that at the end of the day, every person is an individual and deserves to be treated like an individual. For example, we don't want somebody to be aced out of an opportunity because they're African-American; on the other hand, we don't want somebody to be aced out because they're not African-American. That's the thing: objectivity is a lot more complicated than people think. People think, oh, well, we just hired the best. Well, how do we know what the best is? Exactly. Meritocracy sounds really simple, but it's actually really complicated. A very significant part of maintaining that diversity balance is that you need to be able to evaluate everybody as an individual, and not just, or not predominantly, based on their demographic cohort. And that's actually really tricky, because you need to overcome personal bias. Personal bias is natural: generally speaking, you gravitate toward people who look and act like you, so if the majority of your management ranks are Caucasian, they would naturally gravitate toward more Caucasians. But on the other hand, you also need to make sure you don't err too much toward just fulfilling a policy without evaluating the individuals.
And I think keeping that in balance is really tricky. It's a lot harder than most people think.
Kristi: Well, and then there’s this idea of tokenism, you get one person then you’re like, well, we’re good.
Doug: We’re golden, we’re fine.
Kristi: The team has diversity. So you've got this one person.
Doug: Yeah, exactly. But then I think the thing to ask is, okay, what is the mix of skills and perspectives that we really need? Where do we find those skills, and how does that mix change over time? Another thing you want to look at is: are the factors we're looking at comprehensive, or should there be other things that we include in the mix, or take out of the mix?
Kristi: Okay. That's one place where the AI algorithms can actually help.
Kristi: Right. Because we can't.
Doug: Hey, score one for AI. We were kind of bagging on AI; it's good to see AI making a comeback.
Kristi: Oh, we can look at a lot more factors, and with these latent factors we might be able to find patterns that we wouldn't have seen before. So we might be able to treat someone a little more like a whole person, rather than these ten variables.
Doug: Yeah, and you touched on something I think is really important. The way I think about AI is that it's really intended, and best used, to augment and improve decision-making. If you're just making every decision off your gut, it's going to be rife with bias; but on the other hand, if you just let the AI make the decision, then you're eventually going to have Skynet from Terminator. So I think what you ultimately need is a decision committee that brings in diverse views, and then use your AI to figure out the things the humans are missing: either the things that don't matter anymore, or the things that haven't been looked at yet.
Kristi: Yeah. Because they’re just such powerful tools.
Kristi: We're at a point where, I mean, some companies have been doing this for many, many years.
Kristi: But many companies are just at the beginning of incorporating these tools, and that's the time when you really have to think it through. When you're first incorporating them: how do we make sure we don't build biases into the data sets we're feeding these things? That's where we are now, so this is so important. I want to recommend a book called Race After Technology, by Ruha Benjamin.
Doug: That sounds interesting. Yeah.
Kristi: Excellent. She goes through example after example after example of where AI systems have failed, and then what you can do about that. Really excellent. I teach a lot of blockchain courses now, we have a blockchain program at Portland State, and it's the same thing: it's new technology.
Kristi: Companies are just starting, so now is the time, because you're going to literally code your biases in, right? And once those systems are there, especially with blockchain, some things aren't reversible. Some contracts can't be changed once you write them.
Doug: Well, I was going to say, yeah, especially with blockchain. Reversal is theoretically possible, but try it. Ripping out and replacing a blockchain is really tricky, because it means you have to reverse the entire transaction history, which, if it's dispersed, could be all but impossible, like you were saying. At that point you'd have to deprecate your blockchain and create a new one, which has its own set of problems.
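Doug's point about having to reverse the entire transaction history can be illustrated with a minimal hash-chain sketch. This is not any real blockchain implementation, and the transaction strings are made up; it only shows the core mechanic he's describing: because each block's hash folds in the previous block's hash, editing one old record invalidates every block after it.

```python
import hashlib

GENESIS = "0" * 64

def block_hash(prev_hash, data):
    """Hash a block's payload together with the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(transactions):
    """Link each transaction to everything that came before it."""
    chain, prev = [], GENESIS
    for tx in transactions:
        h = block_hash(prev, tx)
        chain.append({"data": tx, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Re-derive every hash; any edit upstream breaks every link downstream."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2", "carol->dave:1"])
chain[0]["data"] = "alice->bob:500"  # "reverse" one old transaction...
tampered = not verify(chain)         # ...and the whole chain fails verification
```

To actually change history you would have to recompute every downstream hash on every dispersed copy of the chain, which is the "all but impossible" part.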
Doug: And I think that's actually a really prescient point: there are a lot of core datasets that already have quite a bit of bias built into them. So the whole question is going to be, how do you control for that, and how do you control for it objectively? Because you can say, hey, this data set is biased, so I'm just going to adjust it this way or that way. Well, at that point, what's the point of having a data set? Just make something up and put it in. And that's the tricky part: if you want to make data-based decisions but you have a biased data set, how do you adjust for that in an objective way that isn't just making something up? Because the other thing I've noticed is that people love to say they're making big data-based decisions, but they'll put some kind of tweak into the algorithm that basically ends up codifying their bias. So I'm like, okay, stop pretending you're making data-based decisions; just say you're making something up.
Kristi: Well, sometimes you do have to make data up, because the sample will be so skewed that you won't have enough of the minority, whatever it is. So if your database is all dogs and a few cats, and you're trying to teach your algorithm to differentiate, you need more cats. You might have to replicate the ones you have, make some up, or find more from somewhere else.
Doug: Well, and I think what the diversity-of-sample example really shows us is that the robustness of our datasets is not nearly as good as a lot of people think. And if you don't have comprehensive datasets, you don't really have a foundation for making good data-based decisions. So a lot of the things we think of as data-based decisions actually just end up codifying whatever biases are built in, whether it's one side or the other, up, down, left, right, north or south. In a lot of cases, we probably need to work on getting more robust datasets together before going too fast on the data analysis train.
Kristi: Right, and sometimes that's impossible, because the world has been biased, so the data are going to be biased. Here's a simple example from my university: underrepresented minorities graduate at a lower rate and drop out more, for a whole host of reasons. So you look at that data and say, oh, we already know these students are less successful in college. And when we're out recruiting at high schools and we want to bring in the kids who are going to succeed, we already "know" those are the majority kids. And so you just recreate that situation.
Kristi: Because there was bias in the world. The data are accurately capturing what happened; the data are correct.
Doug: Yeah, that's...
Kristi: The data are unbiased. The world was biased.
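Kristi's recruiting example is a classic feedback loop: a model scored on accurate but bias-shaped historical outcomes reproduces them. A toy sketch of the failure mode (group names, rates, and applicants are purely illustrative, not real admissions data):

```python
# Historical graduation rates reflect unequal conditions, not ability.
historical_grad_rate = {"group_a": 0.80, "group_b": 0.55}

def naive_score(applicant):
    """Score an applicant only by their group's historical outcome."""
    return historical_grad_rate[applicant["group"]]

applicants = [
    {"name": "Maria", "group": "group_b", "gpa": 3.9},
    {"name": "Tom",   "group": "group_a", "gpa": 3.1},
]

# The "data-driven" ranking ignores individual merit entirely:
# the correctly-measured history pushes the majority-group applicant to the top.
ranked = sorted(applicants, key=naive_score, reverse=True)
```

Nothing in the data is wrong; the model simply recruits the way the biased world graduated, which is the situation getting recreated.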
Doug: Yeah, exactly. And it's funny, because this is what I would call, how do I want to say this, manager thinking. You'd say, oh look, these people have a lower success rate. And chances are the reason they don't graduate as often is that they run out of money. So maybe what you should be doing instead is figuring out how to help those folks get grants so they don't run out of money after a year and a half in school.
Kristi: Yeah, and that's one simple thing we should be able to see, but then there are all the tiny things, all the things that happen that make it less comfortable for some people than for others. We have to chip away at those things and keep working at it: recognize that there's bias in the world, bias in our data, and bias in our algorithms, and just keep trying to find it and figure out what to do with it.
Doug: Yeah, keep figuring it out. And I think that's the other thing too: in a lot of cases, you don't have the luxury of stopping, so what you have to do is try to make things better as quickly as you can.
Doug: Alright, well, I think we're just about at time, so leave us with some parting thoughts, Kristi.
Kristi: Yeah. Well, my parting thought: just really think. You can't get outside of yourself when you're busy and going fast, so just step back and notice what's happening. Try to see the context, and if you can't, try to get some people around you who can.
Doug: Alright and give us the name of that book again.
Kristi: Ruha Benjamin is the author. I can't remember how to spell her name, but...
Doug: I’ll look it up. It’ll be fine.
Kristi: Race After Technology. Race After Technology.
Doug: Race After Technology.
Kristi: And I think the subtitle is something like "the New Jim Code."
Doug: Okay, alright. Well, thank you very much, and I hope you have a great rest of your day.
Kristi: Thanks for having me again.
Doug: Okay, so following up on that conversation with Kristi, what I really came away with was the importance of thinking about the long-term impact of your decisions. And I think that's really important right now: at the time of this recording, the United States presidential election is technically past, but there is still wrangling over vote counting, recounts, whether the people who voted were eligible voters, et cetera. By the time you listen to this, that may all be a distant memory, or, who knows, it may go on indefinitely. But one of the things that's important to think about is the long-term impact: what kind of precedent do you set when you make decisions? That matters when you're deciding how to implement AI, but it also matters when a business or a nonprofit is deciding how to meet its budgetary challenges. In a lot of cases, businesses will say, okay, we need to tighten the belt, so we're going to have to lay people off. A lot of times that can help in the short term, but it can create longer-term problems. And that's one of the things I really enjoy about what I do with Expense Reduction Analysts: my business is specifically about helping businesses find cost and overhead reductions that don't involve layoffs. You may use that to preserve jobs at your company, you may use it to make investments, or you may just use it to improve your finances, but whatever the case may be, I am here to help. I would really appreciate the opportunity to talk, so please schedule some time on my calendar at meetdoug.biz. That's www.m-e-e-t-d-o-u-g.b-i-z.
I would love to talk about how I can work with your company, or with the company of somebody you know, because even if you don't have a midsize enterprise that needs expense reduction consulting, it's almost certain you know somebody who knows somebody who is trying to make some really tough budget decisions and could really use some help. That's really what I'm here for. I really appreciate you listening, and I'm looking forward to the next episode.
Thank you for listening to the Terminal Value Podcast. Share it with your friends by sending them to terminalvaluepodcast.com. For full access to Doug's products and services, please visit businessoflifellc.com.
All rights reserved. No part of this broadcast may be reproduced in any form by any means without written permission from Business of Life LLC.
All trademarks and brands referred to herein are the property of their respective owners.