Note: The I AM GPH podcast is produced by NYU GPH’s Office of Communications and Promotion. It is designed to be heard. If you are able, we encourage you to listen to the audio, which includes emphasis that may not be captured in text on the page. Transcripts are generated using a combination of software and human transcribers, and may contain errors. Please check the corresponding audio before quoting in print. Subscribe now on Apple Podcasts, Spotify or wherever you get your podcasts.
Maureen: Hi. My name is Maureen Zeufack, and you're listening to the I AM GPH podcast. In this episode my guest is Besa Bauta, who serves as Chief Data Officer and Chief Compliance Officer for MercyFirst, an organization that provides health and mental health services for clients in New York City. Besa is an NYU GPH and NYU PhD alumna, and currently serves as an Adjunct Assistant Professor at NYU's Silver School of Social Work alongside being a licensed clinical therapist. So she's got quite the repertoire and has engaged in a number of projects, including the Feedback Research Institute and a group she co-founded called the Social Impact AI Lab, both with the goal of leveraging technology to improve mental health and welfare services for children and families, which she'll tell us more about in this episode. Besa and I also get into an insightful conversation about the use of artificial intelligence in mental health services and what the implications of that are, as well as how the domain has been impacted by the spotlight that the COVID-19 pandemic has put on the importance of mental health. You're not going to want to miss out on this fascinating episode. You're listening to the I AM GPH podcast; today on this episode we have...
Besa: Hi. My name is Besa Bauta, I'm the Chief Data Officer for MercyFirst, I also serve as a Chief Compliance Officer there. And because I'm not busy enough, I'm also the Chief Analytics Officer for Feedback Research Institute and I teach at NYU School of Social Work.
Maureen: Happy to have you on the podcast, Besa.
Besa: Thanks, Maureen. Pleasure to be here.
Maureen: Can you tell us a bit about your background and what brought you to NYU? Career trajectory-wise, so what drew you to pursue this line of work?
Besa: So I've always been interested in the human services sector. What that means is that I'm interested in the welfare of the individual within their communities. And that is really important, especially now with social determinants of health: thinking about the person within their environment, and how much effect environment and context, including policies and procedures, actually have on outcomes and health and wellbeing. And that was an interest of mine early on, before I even knew what the words actually meant. And then in my journey, I started out with refugee services in the former Yugoslav republics, during the wars there. One thing that I ended up realizing is that war and conflict and trauma actually have long-lasting effects. And growing up in conflict-related situations actually affects health and wellbeing over and above just the medical model of disease. That's where my career started. And actually one of the professors at NYU was there during the Bosnian war, and she was working on the ground with the Bosnian refugees, as well as the Kosovo refugees. And she introduced me to the field of mental health and behavioral health. She's a professor at NYU in the public health program. So I followed her, she became my mentor and later my dissertation chair, which is a fascinating story, as far as how I ended up getting into the public health field as well.
Maureen: That's amazing. That's such a fascinating connection and to see how that's flourished into being, now you are a professor too, so that's amazing.
Besa: And it's important to have mentors that show you the way, I think. My mentor was Dr. Deborah Padgett at NYU, and obviously, she's a professor there. So it was her journey, and the journey of one of her other students who was on the ground doing a lot of that work in the former Yugoslavia. She introduced me to the whole concept of understanding the person, understanding the environment, understanding the importance of health and wellbeing, and especially mental health as part of disease. So as you mentioned, as far as my journey, I ended up going to the School of Social Work, because she teaches at the School of Social Work. And then when the program started in public health, I followed her into public health. And then I decided, you know what, I'm going to continue on, and she ended up being my dissertation chair. So that path has a lot to do with mentors, and being open to your interests and following and pursuing those interests are key.
Maureen: That's definitely a great nugget of wisdom to tuck in the back pocket.
Maureen: So you currently work as Chief Data Officer and Chief Compliance Officer for MercyFirst. Can you elaborate more on what your position consists of, and what has the experience of working in that domain been like?
Besa: So, I mean, initially, like I said, I was more in the social service and the healthcare sector. I'm a licensed therapist; I ended up doing individual work. But my undergraduate at Rutgers started out in electrical engineering and moved into anthropology, a mixture of both. So what was really interesting is that I had the technology background as well as the statistics acumen throughout my career. I gravitated to looking at issues, but looking at them at a population level. It's great to have an individual impact, but when you want to have a larger-scale impact, that happens at the aggregated level, the population level. So for me, it was always trying to figure out, do these services actually work? Is what you're doing actually having an impact? I mean, it's great that everybody wants to help out, but is the help the right help? Is it provided in the right way, so that it actually does help rather than harm? And there are plenty of examples throughout history of programs that had really good intentions with really bad outcomes. Like "Scared Straight" programs, or different juvenile justice programs that really failed to show those outcomes. I mean, everybody's hearts were set in the right place, but when you actually looked at the evidence, it was a wash pretty much; it really didn't have that impact. So my current role as the Chief Data Officer is looking at organizational data and seeing if that information can be used in a way to actually highlight what's happening within the organization, number one. Use that data for value, for decision-making, and obviously to improve outcomes. My other role as Chief Compliance Officer is more of an ethics role. Obviously, ethics around information and data is really important, especially now. Is the information being collected ethically? Number one.
Are we following protected health information guidelines, so that everybody has the right consent regarding what information gets shared, how it gets shared, how it gets consumed, how it gets interpreted, and what even gets shared publicly? So all of those components are key, and it's a good marriage between thinking about the projects and outcomes, but also thinking about the ethics and the use of that information in a way that protects the information of patients, so it doesn't get used, I hate to say nefariously, but unintended consequences usually happen. Nobody intends to misuse information, but sometimes, in hindsight, which is always 20/20, you see that you didn't intend to do that, but it happened. So in a compliance role, it's thinking through: if we did this, what would the consequences be upstream or downstream?
Maureen: And so with that discussion of the use of data and what the intentions are, what has your experience been working in health tech? And are there any areas you feel could be improved or evolved?
Besa: Yes. There's always room for improvement. We're always evolving, always learning, and I say the path is paved with lots of potholes along the way, but it's okay, as long as you're learning as you're progressing forward. I think especially now a lot of your listeners and others are learning about AI, artificial intelligence, different types of applications. I know all of us have been tied to our phones, relying on Siri and Google. I mean, I am from a time when we didn't have Google Maps, where we actually had a map and had to figure out how to get there, and use your own acumen to navigate the terrain. Now, we're so reliant on technology in lots of ways. Every morning I get up, I ask my Google what temperature it is, what time it is, what's my next appointment. So there's the digitization of individual lives in a lot of ways, and it's become pervasive in our lives as far as support, which is really great. But also you have to think about... it's providing a service, but on the other side, you're providing a lot of information, and that information is protected in lots of ways under state and federal laws, for our benefit, obviously. So individuals have to weigh the pros and cons of that. And the same thing within organizations: we do want to improve individual outcomes, but we also have to be very cognizant of, at what cost? So that cost-benefit analysis has to come into the calculus at every point in time: if I use this information to create an application that actually makes your life easier, what type of information would I be collecting, and how will that information be used? And becoming much more transparent regarding the utility of that information is really key. And a lot of corporations are becoming much more transparent, and along the same lines the European Union passed a pretty large data privacy and protection law; it's called the GDPR, the General Data Protection Regulation.
And one of the regulations was that there was going to be greater transparency regarding what information you're sending to Google. How are they using that information? Are they selling it to third parties? So what's happening with this information? And especially now during COVID, a lot of your healthcare information is used differently. For example, vaccine tracking, COVID tracking, all of that. And it's important to understand that the city and state systems are using this information, but hopefully, they're using it in the right way to improve public health outcomes. But it's also important to note: is this information being used for research purposes? What else is it being used for? And having that transparency across the system, I think, is key. That's some of my work: working with constituents internally as well as externally, thinking through how we can use this information to improve outcomes, but also what are the barriers and challenges that we need to think through and address before we do use this information?
Maureen: Thank you for providing that perspective.
Maureen: Yeah. So you're somebody who has a lot of hats and a lot of things going on and so something I wanted to ask more about is, could you share about your project with the Feedback Research Institute and what it entails? What's been most rewarding about your work on this?
Besa: Sure. So as you say, Maureen, I wear multiple, multiple hats; sometimes I keep forgetting which hat I'm wearing at that moment in time. So the Feedback Research Institute was founded by Leonard Bickman and Thomas Sexton. Dr. Bickman is a Professor Emeritus at Vanderbilt, and Thomas Sexton is a developer of an evidence-based intervention for children and families at risk. So both of them developed the Feedback Research Institute, and they were interested, through a colleague of mine, in the work that I was doing for the Social Impact AI Lab. So two years ago I had this brilliant idea of taking what's happening in the technology sector and bringing it to the social service and human service sector. I wasn't sure exactly how any of those ideas were going to gel. And my idea was thinking about what information can be consolidated from multiple sources to get a really better view of the individual within their environment. So it tied back to a lot of my public health studies and work that I did, thinking about the social determinants of health. So I ended up applying for a Robert Wood Johnson Foundation award for social determinants of health, and we ended up winning that challenge, that national challenge. And the goal of that challenge was to think about the individuals that we're serving, and my organization serves children at the highest risk and highest need, mostly children in child welfare, or refugee children, or migrant youth. And thinking about how can we get a really good understanding of that child within their environment? So pulling in educational data, pulling in social service data, pulling in mental health data, pulling data from electronic health records, to get a better understanding of the child's strengths as well as risks, but also understanding the child's family in a better, more holistic way, including their community.
So then the intervention can be tiered not only to the child but to their family, and also to their community as well. Because like I said, this comes to bear, and we see this especially now being highlighted, that the environment is really key for our health and wellbeing. But it's thinking through how we can use this information in a way that can be consolidated and weaved together to actually provide the organizations with a real-time view of what's happening internally, number one. And take every single data source that they have and group it together in a way that provides a 360-degree view of that patient, or that individual, or that client. And the goal of that would be to make that information much more transparent and usable for the organization, and to be able to use that for decision-making, including outcomes.
Maureen: Okay. That's fascinating. I think it's really interesting that you provide that perspective of a holistic view of a person in order to adequately assess their needs and their situation.
Besa: Yep, absolutely.
Maureen: So you just touched a bit on the Social Impact AI Lab. What kind of work is being done specifically from that lab, and how has that been able to grow in the past year?
Besa: So, as I mentioned, we did apply to the Robert Wood Johnson Foundation. Initially, the way the Social Impact AI Lab started was as sort of a kernel of an idea. The Executive Director of our organization, MercyFirst, had a colleague reach out to him about a new venture thinking about how we can bring artificial intelligence into the human service sector and the social service sector. So that sparked an interest and an idea in my head, and I'm like, there's really not enough investment in community-based organizations that are providing the highest-level services for the folks in greatest need. There's really a lack of technology in this sector. So for me, it was that: why are we working with Excel sheets and getting the crumbs? Why aren't we working with technology companies very differently, as far as transfer of that knowledge to this sector? Because it would have the greatest social impact, the greatest return on human investment in a lot of ways. So I ended up partnering with two other not-for-profit organizations, New York Foundling and SCO [Family of Services], and two of my friends and colleagues there, and had an idea: can we come together with different technology companies and have them provide us support and transfer of their knowledge, including their technologies? Obviously at no cost, because community-based organizations that provide high-need services usually do not have large funds; we're usually contract-based. So, working with these technology companies that do have the social responsibility and the social good aspect to them, and thinking about diversity and inclusion as part of that. I mean, that was key, especially now, for a lot of the technology companies: to think about this sector and think about how we can improve lives.
And the way they can do that is actually to help us advance, to provide the most appropriate services, by helping us design infrastructures that support those outcomes downstream to the most vulnerable populations. So we ended up partnering, the three of us; that's how it was founded. And we ended up applying to the NYU Entrepreneurs Challenge, the $300K challenge. We ended up being semi-finalists there, and then obviously COVID hit, so everything went on hiatus, froze for a while. At that point in time, we made the decision to halt all activities and figure out how this pandemic was going to play out, what it would look like. And then at that point, we decided to reenter the challenge for this year for NYU. Again, we're semifinalists in the current challenge.
Besa: Thank you. Thank you very much. And it actually streamlined a lot of our current relationships with different technology partners; Microsoft is one of our partners. And having Microsoft as part of their Tech for Social Impact is really great, because they have a division for social impact, and they're working with not-for-profits and others to help us accelerate and digitize our infrastructures. And we wouldn't be able to do this, obviously, if we didn't have the likes of Microsoft, Solunus, and UiPath, to name just a few of the corporations that are actually working with us to help the sector digitize. To help us not only structure environments, but also help us use our information in a way to develop different applications, to provide real-time feedback to our clinicians. And obviously, if you have real-time feedback, as I was mentioning, just like with your Siri, you could ask Siri, or whatever application we're creating, what are the next steps? And then following that, including evidence-based guidelines, would definitely improve outcomes for our children and families.
Maureen: So with that, AI and mental health are often viewed as existing in separate spheres. What brings these two concepts together, and what makes leveraging tech and AI for human services work?
Besa: I don't know if you noticed, but the pandemic highlighted one really important factor: mental health. Who knew that humans are social creatures, right? That mental health is important.
Besa: So being isolated obviously raised stress, raised anxiety; not knowing what's going to happen next actually highlighted a lot of those issues. And you saw a huge rise in demand for mental health applications. Before, you never saw commercials for Headspace. Now you were seeing commercials with that little bubble going along, or different types of therapy applications. Within 2020, you had over 1,000 new organizations specifically focusing on mental health. So you had over a thousand different types of applications, including corporations, just starting up, new startups in 2020 to address mental health. So obviously they saw a need, and the need is, based on this pandemic, that mental health is a really important component. So if you think about all the negatives, all the lives lost in this battle, the positive is that it put a huge spotlight on this area of mental health, the area that I've been working in for the past 20 years, in a way that is really important. There's no separation between mind and body, there's no Cartesian dualism between the two; they're both just as important. And the mind is sometimes a little bit more important than the body, because the way you think, the way you behave, the way you act has a lot to do with how well you do, even with your medication, your regimen, or even stress, right? When you go to the doctor, the doctor will tell you, "Now, you need to be a little less stressed." So mental health is really important. So when I think about artificial intelligence applications, that technology sphere, combining that with mental health is just basically accelerating what's already there in the market. And thinking about how we can take applications that can provide us information in a way that's timely and relevant. So it's not the black box of artificial intelligence as a catchall phrase.
It's looking at the data and seeing, based on these safety and risk factors, what are the potential next steps as far as therapy intervention. So you could have a clinician that's providing an intervention that's manualized; it works better if it's manualized. And they're going through this manualized approach during the therapy session, asking certain questions. So the artificially intelligent application would augment that therapy by highlighting to the therapist: you've asked three out of the five critically important factors, you're missing two. So it's basically putting guardrails in the therapy process and augmenting that therapy. Not taking the human out of the equation, but helping us do our work in a better way. So for us, it's building these applications that provide measurement feedback, real-time feedback, to the clinician and therapist, for them to augment their therapy practice or augment their interventions, so they're much more effective, number one. But they're also much more tailored to the child and individual, because you're taking the individual context as part of the therapy practice. And all interventionists actually do this without really thinking, but highlighting certain things makes it much more mindful for them as they're going through their therapy practice.
Maureen: Wow. I think that's really great to hear. I think something that you said that struck me was, it's not taking the human out of it, but it's augmenting the whole process. And just like you were mentioning, we use Siri every day, I think it could be used to better tailor the needs. So I think that's really interesting, I hadn't pondered that before.
Besa: Absolutely. I mean, we're so reliant on technology; I see my niece and nephew, they're so adapted to it, it has become second nature. It's not that it's taking away anything, it's supporting us to do other things. It frees time and helps us restructure. Obviously, it's really important to think about what the recommendations are, the correct recommendations, right? Because everything can be biased; all our data is biased, biased in multiple different ways. Without even thinking about it, we bring our biases into everything. So it's really important to be really aware of what you're bringing to the table. What is this application surfacing, and does it skew the therapist's intervention negatively? And it's also really important that whatever application gets developed, gets developed with that in mind. And that's where the compliance piece, the ethics piece, is really key. And we've seen this a lot of times. There was one case, I can't remember if it was Florida or Minnesota, but it was a criminal justice application that got developed, I think it was called COMPAS, where the algorithm would actually highlight if an individual was at a higher risk than somebody else to re-offend. And what they figured out is that the algorithm was biased because the data that was being fed into it was biased. So it's really important to have that a priori: looking at the information and looking at potential opportunities to improve that information, to improve that data, so you don't get those errors or that bias downstream. It's a work in progress. Sometimes we're not aware of our own biases and how that gets transferred to the data and how the data gets collected.
Maureen: Exactly. There are certain people, I'm sure even those who work in your field, that are wary of a potential dependence on AI in human services. So are there any innovations in the works with AI to address this kind of trust issue?
Besa: Let's see, the trust issue. I think AI can help somewhat, but it's a human issue. Ethics is a human issue. Human rights are a human issue; that's why they're called human rights. In a lot of ways, I mean, the onus is on the end-users, the individuals, and the developers themselves to design applications, and to actually think through the design early on and take bias into account in any equation. So if you have any statistical model, you have to take into account biases, right? Data that's missing, data that might be skewed in a certain way; and have that a priori, built into the application. As well as building human feedback mechanisms along the way with different constituents, from different groups. Looking at this information and saying, "Wait a minute, this doesn't make sense. This disproportionately says that this individual needs more therapy versus somebody else, but there is something wrong here, because this other person needs it just as much." Why is the algorithm choosing one individual over another, as far as needing an intervention or needing therapy? It's always really important to think about how the information gets collected, number one, and how it gets used. But the human-in-the-middle factor and the ethics have to go hand-in-hand throughout the development of any type of application. I think that's really, really key, and I hammer that home in a lot of the meetings that we have, and my colleagues as well; they're very focused on not only equity and ethics, but also justice and beneficence. I think that's really key for any sort of intervention. That's also key in research as well: do no harm.
Maureen: Right. That's always a goal. So you've mentioned how COVID-19 has exacerbated mental health concerns. Has the virus or the pandemic influenced research being done in your field at all, and how has that happened?
Besa: So I think COVID had a couple of positives that I could see. Obviously, there are plenty of negatives; nobody wants to be in this pandemic, because of the social isolation. On the positive side, it did raise awareness of mental health. Another positive, as far as COVID, is that, as we're doing now, everything is online and virtual. It has some benefits; it has a lot of drawbacks as well. Another really important key is that it accelerated a lot of the technology that was being used in the sector, all the healthcare sectors, and many other sectors, even education. My colleagues in the educational field, K through 12 and universities as well, had to pivot pretty quickly to online platforms. They were dabbling with massive online courses and things like that before 2020, but it wasn't as mainstream as it is now. I think now it's common. Of course my classes are over Zoom. Of course I have an assignment that's going to be graded by Turnitin. Those are recent developments in the last 10 years, and then COVID pushed... It was a tipping point, using Malcolm Gladwell's analogy, that tipped us over to becoming much more comfortable with these technologies, number one. And not having those barriers as far as what it means to do this. Can we explore other ways of interaction? So I think it's like a wild west. It opened up this other door for us to think about: there are different ways of learning. There are different ways of doing. It has benefits. Online learning does have benefits, to do some things online and some things offline, obviously, for individuals that have disabilities or other learning challenges. And the same thing even for the human service sector, or mental health, or child welfare: we could do a lot of the services online and remotely. We could assess safety and risk very differently. We could have those casework contacts or just case management online. I think some of those barriers got removed.
I remember having conversations two years ago and saying, "Why does this child have to travel? Why can't they just have this conversation over the phone, over Zoom?" And there was this huge barrier of, "No, no, no, no, no, but that's how it was done." And I think I'm glad, in a way, that COVID opened up the floodgates and said, "No, that's not the only way it can be done. This is very narrow-minded. There are different ways of working and interacting, and providing this child with a phone and internet access means that he has access to a whole new way of communicating." Shifting the whole paradigm of having to do things in person: there are different combinations, and allowing us to think about what those combinations can be, I think, is really fascinating. So COVID did accelerate the technology in a way that changed our minds. It was a catastrophic event that said, "Yes, we were set in our ways, but there are so many other ways to do things." So I think this is also a great moment to reevaluate how we do things and what we can take from this. What are the lessons learned, of what worked and what didn't work? So it's an exciting time because it's also a disruptive time. And usually, when you have disruption, you also have really great innovation. Almost every single time you've had disruption, you have had great innovation. So I'm really excited to see what's going to come in the next five or 10 years as society is changing, with new ways of thinking and new ways of working. I think that's one of the other positives of this pandemic. I'm not harping on the negatives, because the news harps on all that negative all the time.
Maureen: Right. I think we're all aware of the negatives. So I think it's great that you have that I guess, foresight and positive mindset of looking at the exciting innovations that are sprouting from this.
Besa: No, and I'm excited to see what's going to happen in the next few years, as the new generation of students are graduating and they have really great, bright, interesting ideas along the way. And even examining the pandemic allows each of us to examine our environment and the way things are working, even for you or me, and say, "This is not working for me, how else can I change that?" I think that is really amazing.
Maureen: Definitely. Definitely. So my final question is what motivates you to do the work, put the hours in, and do the research that you do?
Besa: That's a really good question, Maureen. What motivates me? What makes me get up each morning? I've always been interested in what we can do to improve health and wellbeing, and especially mental health. And it has to do with personal experiences related to mental health. I grew up in the former Yugoslavia, I mean, Macedonia. So my experience was always clouded with safety and wellbeing. My daughter mentioned Maslow's hierarchy of needs the other day. It's really important to think about: if you don't have safety and wellbeing as the key foundation, you cannot think about anything else afterwards. So that has always stuck with me, that having those components of safety, wellbeing, support, and human support are key ingredients as far as outcomes and health and wellbeing, actually. And that followed me throughout my career. Working in child welfare, I get to see the worst of humanity in a lot of ways. But then I also get to see the best of humanity. There are some amazing, selfless, heartful people; I mean, you see the heartlessness, but then there are some amazing individuals out there who give day in and day out, and it doesn't pay that well, number one. They could get much more money somewhere else, but they do this because they really want to improve outcomes. And there's a heart and soul in that, in a way, and I think that drives me, when I see that every day: there's this huge group of individuals out there working to improve health, improve wellbeing, improve the lives of others that maybe didn't get a good chance in life in a lot of ways. And I think that's what drives me: what else can I do to improve outcomes? And for me, at my current stage in my career, I think, what can have the greatest return? I mean, I did the individual therapy, but as far as thinking of public health, what can you do? What types of interventions can have the greatest impact?
And getting even the students to think about what could be done at a population level, either policy, intervention, regulation, or whatever else you could create, that could either examine something or actually build a product, build something, that can actually change lives in a way that makes them better. And that drives me every day: what can I do to actually improve that?
Maureen: That's wonderful. Well, I appreciate you taking the time to speak to me and sharing your story and everything that you do. Thank you for coming on.
Besa: It's a pleasure being here. I'm so excited. I love the public health program; I learned so much, and I'm so glad that I did it. I keep telling my family it was one of the best experiences, that it changed my perspective. My training was very individualistic at that time, and moving more into population-level outcomes and thinking through things in a different way, at a larger scale, is really key. And even now with COVID you get to see how herd immunity and population-level factors are really important. How that pathway from individual to population is tied together; we're part of a web, part of a chain. What one person does affects another. When the pandemic started, we thought, "It's isolated in China." And look, we're in the midst of it, it's a global pandemic. So that highlights public health writ large, that it is really important to think about these things from all angles, and that's one of the benefits of this program.
Maureen: Yes. I hope that our listeners who are all part of this greater web enjoyed listening to what you had to say, and your insightful knowledge.
Besa: Thanks, Maureen. Pleasure being here, and thank you for listening to me and my story.