Conversations on Applied AI

Younes Amar and Mohammed Sabri - Operationalized ML and Managing Customer Expectations

September 27, 2022 Justin Grammens Season 2 Episode 25

NOTE: I am excited to announce that all listeners of the Conversations on Applied AI Podcast are eligible to receive a 50% discount at the 2022 Applied AI Conference! Just use the discount code of "podcast" when purchasing your ticket.

The conversation this week is with Younes Amar and Mohamed Sabri. Younes is the head of product at Wallaroo, an enterprise platform for production AI. As a product leader, he is responsible for vision, strategy, and roadmap, and for building product teams that use qualitative and quantitative data to deliver effective product designs and experiences. Mohamed Sabri is the founder of Rocket Science and the MLOps Institute, where he is developing the next generation of MLOps engineers. He is also a data science mentor at MIT and wrote a book on data science essentials called the Data Science Pocket Guide.

If you are interested in learning about how AI is being applied across multiple industries, be sure to join us at a future AppliedAI Monthly meetup and help support us so we can continue to put on future Emerging Technologies North non-profit events!

Resources and Topics Mentioned in this Episode

Enjoy!

Your host,
Justin Grammens

Mohamed Sabri  0:00  

AI is not a magic box that you just plug into a wall and that will answer all your life's questions. I see it a lot in chatbot use cases. I am always alarmed when a customer is looking for a chatbot, because I make sure that they understand the maturity of chatbots in the market today. With that, we can manage the expectation.


AI Announcer  0:25  

Welcome to the Conversations on Applied AI podcast, where Justin Grammens and the team at Emerging Technologies North talk with experts in the fields of artificial intelligence and deep learning. In each episode, we cut through the hype and dive into how these technologies are being applied to real-world problems today. We hope that you find this episode educational and applicable to your industry, and connect with us to learn more about our organization at appliedai.mn. Enjoy.


Justin Grammens  0:53  

Welcome everyone to the Conversations on Applied AI Podcast. Today we're talking with Younes Amar and Mohamed Sabri. Younes is the head of product at Wallaroo, an enterprise platform for production AI. As a product leader, he is responsible for vision, strategy, and roadmap, as well as building product teams that use qualitative and quantitative data to deliver effective product designs and experiences. Mohamed Sabri is the founder of Rocket Science and the MLOps Institute, where he is developing the next generation of MLOps engineers. He is also a data science mentor at MIT and wrote a book on data science essentials called the Data Science Pocket Guide. You can find his book on Amazon. So thank you, Younes and Mohamed, for being on the program today.


Thank you.


Younes Amar  1:35  

Thanks for having us.


Justin Grammens  1:36  

Excellent, guys. Well, great. Having you both on the show here is a treat, as you both live in this world of MLOps, and we haven't really delved super deep on this program into what that means. I'm super excited for our listeners to get a glimpse into two awesome companies doing work in this space today. So, Younes, maybe I'll ask you to go first. Could you give us a short background on the trajectory of your career, how you got into MLOps, and what Wallaroo does?


Younes Amar  2:02  

Absolutely. So I started my career as a software engineer and transitioned to product management, somewhat by accident, really. The type of companies that I've worked for have always had this focus on data analytics, and machine learning was a natural graduation from working on analytics. Throughout my career, I have seen instances where we had to work on experiments to be able to get to insights from data, but really, what got me into MLOps was helping to deliver that last mile on those insights: being able to operationalize the machine learning, or the data science for lack of a better word, to produce AI that drives an outcome. I discovered that when I was working at Clearcover Insurance, when we had some retention issues and I was able to build a team there to help deliver our first ML models in production, which helped drive retention from there. That was really my first exposure to it, and there were a lot of learnings from it that I was able to take with me to my next adventures with Tempus Labs, and also right now with Wallaroo.


Justin Grammens  3:09  

Awesome. Was MLOps known as a word or a thing in this space at that time?


Younes Amar  3:13  

I would say a couple of years ago it was not well known; it was more understood as this discipline that takes machine learning and puts it in production. So even though the word exists and it's talked about, I think the understanding around it is still evolving, depending on the maturity of the businesses using it and what stage you are at with your AI initiatives, for lack of a better word. If you're still building experimental AI, yeah, you have some MLOps; if you're building industrialized AI, your MLOps practices should be very well advanced. So yeah, it's still evolving. That's probably my take on it.


Justin Grammens  3:54  

That's great. I'd love to get more into industrialized AI and what that means; I'm sure we'll delve into that. Mohamed, do you want to take a shot here? Talk a little bit about your background, how you got to where you're at today, and what Rocket Science does.


Mohamed Sabri  4:07  

Sure. I started my career more in data analysis work, and have spent the majority of my career as a consultant or freelancer. I was a lead data scientist for a couple of years, leading a practice of data scientists and data engineers. And almost a year and a half ago, I started Rocket Science, which is a company that develops solutions in the field of machine learning operations. Today we do mainly services, so we help companies build their own enterprise MLOps environments. What got me into MLOps, actually three and a half or four years ago, was that I had customers, when I was developing machine learning models for them, who were asking me, "Mohamed, can you deploy those models? Can you put in place everything we need to manage the lifecycle?" And I'm always open to learning new things, so I was like, yeah, why not? I did it once, then a second time, and a third time, and I was like, yeah, that's a pretty interesting field. Why not build an expertise in that? And that's how I started having a focus on machine learning operations. As a field, it keeps gaining traction the more AI gains traction, and today there is a need for MLOps, because a lot of data science and machine learning solutions have delays going into production, or they don't go into production at all, and sometimes the reason is related to how companies manage the lifecycle of their machine learning models.


Justin Grammens  5:43  

Gotcha. So say they have models that are deployed; what you're focusing on is making sure those things are up to date and continually being retrained?


Mohamed Sabri  5:52  

Yeah, sometimes they don't, and sometimes there are delays. I mean, based on the statistics, and this is a couple of years old, in some companies it takes up to nine months for the models to be live in production. Not to develop them, but just to put them in production. And that's a problem, because it shouldn't take that long to start getting value out of your machine learning models.


Justin Grammens  6:17  

Oh, for sure. That's like many decades in the software world, I guess, right? You can't be waiting that long. At Rocket Science, do you guys have your own proprietary solution that you use, or do you use what the client wants to use and stitch things together with off-the-shelf components?


Mohamed Sabri  6:32  

We started developing our own technology this year; we got a grant from the Canadian government for developing an innovative solution in the field, to support what we do in services. But today, so far, we use the customer's technology. We're quite agnostic, so multi-cloud: I mean, we've had projects on GCP, on Azure as well, and on AWS. So yep, that's pretty much the landscape.


Younes Amar  7:00  

Yeah, that's good.


Justin Grammens  7:02  

That's what we find at my company too: a lot of companies have built a lot of infrastructure already, and they're not ready to pick things up and move. So we're pretty flexible that way. Now, Wallaroo: you guys have your own platform, though, is that true, Younes?


Younes Amar  7:15  

That's right. We're specialized in what we call the last mile of ML. We're a true MLOps platform focused on the deployment, management, observability, and optimization of models in the context of production. This means that, as a company, you might have reached what we call time to insight, as in you have developed a model and your model is ready to be operationalized and integrated with the downstream systems or operations that will be consuming it to produce that value, or that ROI, on your investment in machine learning. That's where we come in, actually, and help with that operationalization of machine learning. The idea is that we support not just getting into production, but helping you scale, and giving you all the tools to understand what's going on in real time and to take the necessary proactive, preventative, and corrective actions on your models as they start drifting, as they start presenting anomalies, as they start presenting bias. All these things happen when you put your model in production, so the idea is to get ahead of it as much as possible and be able to take the necessary actions. That's really the fun of, as I mentioned, operationalized ML: you get to start seeing what the model is doing, because when you move from a fixed training set to a production setting with volatile data, you start seeing some weird things. And that's where you learn the most; as a data scientist, you'll learn a lot in production. This is somewhat unlike what happens with software engineering, because there, when you go to production, you have ironed out all the kinks and you know exactly what's happening, since you're working with code that is set. In ML, it's a model and it's data. So unless you can control the data, which is not the case today, you have to have the proper monitoring techniques and the ability to react to it, or even proactively anticipate what your data is going to do to your business through the ML that is consuming it, so you're able to course correct as needed. That's the biggest blind spot in MLOps right now.
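To make that monitoring idea concrete, here is a minimal sketch of input-drift detection. It assumes nothing about Wallaroo's actual API; the feature, the window sizes, and the p-value threshold are illustrative assumptions. It compares a recent window of production data for one numeric feature against the training baseline with a two-sample Kolmogorov-Smirnov test:

```python
# A sketch of input-drift detection: compare a window of one numeric feature
# from production traffic against the training baseline with a two-sample
# Kolmogorov-Smirnov test. The window sizes and the 0.05 p-value threshold
# are illustrative assumptions, not any platform's defaults.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(baseline: np.ndarray, production: np.ndarray,
                    p_threshold: float = 0.05) -> bool:
    """True if the production distribution differs significantly
    from the training baseline for this feature."""
    _statistic, p_value = ks_2samp(baseline, production)
    return p_value < p_threshold

# Toy demo: training data was centered at 0.0; production has shifted to 0.5.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=10_000)
production = rng.normal(loc=0.5, scale=1.0, size=1_000)

if feature_drifted(baseline, production):
    print("ALERT: input drift detected, investigate before trusting outputs")
```

In practice, a platform would run checks like this per feature and per model on a schedule, routing violations to an alerting system rather than a print statement.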


Justin Grammens  9:20  

Yeah, I was going to ask: what are the problems? One was speed, basically, the time to deploy your model. It sounds like that's a huge pain point with the customers both of you are working with. What are some other pain points that you feel you're coming in to solve? I'll open that up to either one of you, if you want to share.


Younes Amar  9:36  

Yeah, I can actually start on the pain points. You said it very well: the speed to deployment is certainly a challenge. It's challenging operationally speaking, and technologically also, because some of the technologies out there have been somewhat repurposed to do deployment of machine learning; we were basically using things that were built for deploying software and regular applications. That's challenging because you're talking to a different set of users now: you're talking to ML engineers, you're talking to data scientists. Do they fully understand those tools? Not always. So there's certainly a user friction there, which slows things down. But there's also the overall cultural shift: now everyone can train models, but does everyone have the ability to know exactly how to get value out of them? Those are the challenges that make deploying slower. A lot of companies are certainly starting to figure out ways to speed up deployments; you hear about deployments in minutes, deployments in seconds, like Wallaroo. But it's only the beginning of the journey; it's truly just the tip of the iceberg. From there, how do you know that you have deployed your best model? And when you figure it out for one or two or three models, does it scale to a thousand? Going back to industrialized AI, if your entire operation is running on machine learning, we're talking thousands of models that are running. How do you make sure that you're not burdened by the scale of what you're doing? How repeatable is it when you start going from five models to 20 to 50 to a thousand? So that's really the other challenge: the repeatability of it. And then the last one, which is the blind spot I mentioned, is the ability to observe and take action on those models as they start drifting, as data trends change, as we start seeing some weird results, like what happened with some credit card applications that were getting denied because of a specific bias in the attributes they were looking at. Those are the things that today are still not fully supported or well thought through. Being able to understand exactly what's happening and get ahead of it is really the other big challenge when you have models operating in production.


Justin Grammens  12:04  

Yeah, for sure. And I think people might not even realize you could have hundreds of models running in production. That, to me, sounds like a very, very difficult problem to solve.


Younes Amar  12:14  

Yep. One other challenge I'm going to bring up is the cost of running all of this; it gets really expensive. We've talked about GPUs and CPUs and all the infrastructure that you need to run this. Being able to run this at scale with optimized infrastructure is another challenge, because there's a lot of unplanned expenditure that comes with it as you scale. You have to make sure that your cloud bill is not blowing up, for lack of a better word. That's the other challenge that most companies, at scale, try to get ahead of.


Mohamed Sabri  12:48  

From my end, what I see a lot as a challenge in MLOps is related to the underestimation of the effort that is needed in machine learning operations. I mean, a lot of companies, when they start doing AI and ML, visualize that all they need is a bunch of data scientists and it should be just fine, and that they'll maybe find a tool somewhere at a certain moment to support them. Then they realize that they actually need proper infrastructure, proper pipelines, and access to data. That underestimation is not good, because it's hard afterwards to manage people's expectations, and it might put them in a situation where they're just like, hey, it's just too much. I think that's one of the problems, so that's why I spend a lot of time doing education around why it is important to have an MLOps practice inside the data and AI practice. Another challenge, which is more of a technical, practical challenge, is related to data availability and access to data. There are two things that we spend a lot of time doing in services: it's either the CI/CD, or making data available, meaning building data pipelines, because data is needed during the development of the model, but it's needed as well in production. Those are basically the types of challenges we see in MLOps. In terms of AI in general, there are two problems that I see right now. The first one is managing people's expectations: AI is not a magic box that you just plug into a wall and that will answer all your life's questions. I see it a lot in chatbot use cases; I am always alarmed when a customer is looking for a chatbot, because I make sure that they understand the maturity of chatbots in the market today. With that, we can manage the expectation. The second thing regarding AI is more the adoption, like Younes was saying, which is important; the adoption from an end-user perspective is actually critical. If you develop an amazing solution that is deployed and everything, and it is meant to be used by users inside the company, but they don't decide to change their processes and they keep doing things the old way, with old habits, then they don't use your tool, and your tool becomes not very useful; you don't really create value. So always spending time helping the adoption, and making sure that the end users are using the solution, is important, as is adoption in terms of mindset at a management or executive level, so that everything data related and AI related becomes a priority for the company. Those are the different challenges that I can see right now.
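As a hedged illustration of the CI/CD piece Mohamed describes (not Rocket Science's actual setup; the file paths, metric, and tolerance below are hypothetical placeholders), a pipeline for models often adds a quality gate that blocks promotion when a candidate underperforms the currently deployed model:

```python
# A sketch of a CI quality gate: the candidate model must roughly match or
# beat the deployed model on a holdout set before promotion. Artifact paths,
# the metric (AUC), and the tolerance are hypothetical placeholders.
import json
import pickle

from sklearn.metrics import roc_auc_score

HOLDOUT_FEATURES = "holdout_X.pkl"    # hypothetical artifact paths
HOLDOUT_LABELS = "holdout_y.pkl"
CANDIDATE_MODEL = "candidate_model.pkl"
PROD_METRICS = "prod_metrics.json"    # e.g. {"auc": 0.87} from the live model

def evaluate_candidate() -> float:
    """Score the candidate model on the held-out evaluation set."""
    with open(HOLDOUT_FEATURES, "rb") as fx, open(HOLDOUT_LABELS, "rb") as fy:
        X, y = pickle.load(fx), pickle.load(fy)
    with open(CANDIDATE_MODEL, "rb") as fm:
        model = pickle.load(fm)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

def main() -> None:
    candidate_auc = evaluate_candidate()
    with open(PROD_METRICS) as f:
        prod_auc = json.load(f)["auc"]
    # Fail the CI job (non-zero exit) if the candidate underperforms.
    if candidate_auc < prod_auc - 0.01:   # small tolerance, illustrative
        raise SystemExit(
            f"Gate failed: candidate AUC {candidate_auc:.3f} < prod {prod_auc:.3f}"
        )
    print(f"Gate passed: candidate AUC {candidate_auc:.3f}")

if __name__ == "__main__":
    main()
```

Run as a CI step, the non-zero exit on failure is what stops the deployment stage from ever seeing a worse model.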


Justin Grammens  15:51  

Those are great examples, yeah, for sure. I think with any technology there's always over-promising, or at least people who don't understand it well enough think they can use it across the board on anything. The world of a consultant, in some ways, is just resetting expectations, so what you're saying makes a lot of sense, especially as you go into these organizations that are trying to adopt AI for maybe the first time. You mentioned the credit card example; are there any other types of applications that either of you are working on where you see it being used in a unique and different way? Or maybe even just any way, I guess?


Younes Amar  16:24  

Yeah, in terms of use cases, there's a wide range of AI applications you can look at today. Among the most popular ones, when you look at e-commerce, for example, it's building an optimized top of funnel for your recommendations, targeted based on attributes that drive your bottom line, which is retention and conversion. There are a lot of applications there where you start having better recommendations: not random experiments, they're all targeted. When you go to walmart.com, for example, a lot of the recommendations that you see there are certainly AI-enabled. So that's a common application. When it comes to banking, fraud detection is one of the most common use cases: using AI on a credit card transaction, you can predict if it's going to be fraudulent, and based on that prediction the system will know which action to take. I think one of the most interesting use cases we're now starting to see goes toward manufacturing and IoT, where you have these smart machines, smart devices, with lightweight ML models actually deployed inside the machine. They're able to consume the data in real time and somewhat dictate the behavior of the machine or the device, based on the data they're consuming and the output the model is producing. We're starting to see that in telecom, and even in consumer products and CPG, with smart razors and smart toothbrushes, things that predict if someone's going to get a cavity from their toothbrush. So there are a lot of fun use cases making our lives better. And obviously, going back to my previous experience with Tempus, AI is also being leveraged to help come up with novel therapies for cancer or heart disease. That's where we're starting to harness the power of clinical data and genomic data to come up with powerful models that can help physicians and clinicians recommend the best therapy for patients, especially when we talk about a disease like cancer that is very personal. You can get the same diagnosis, but the human genome is crazy, so for a personal disease like that you have to know what to target and what to recommend from a therapeutic standpoint, to be able to come up with the best decision and recommendation. There are a lot of applications; we could be here for hours naming them.


Justin Grammens  19:05  

How do people get started on Wallaroo? Do you guys have like a freemium model, if somebody has an application that they want to bring in?


Younes Amar  19:12  

Yes, we actually have two offerings. We have the enterprise offering that our enterprise customers are using at scale, but we also have the community offering that is launching with general availability on June 27. That's a freemium offering that anyone can go and download; it can be installed on the three major clouds, and people can start playing with it to see how you can deploy, manage, and observe your models in your own pseudo-production environment, to understand the simplicity and how easy it is to take a model to production. That's really the thing we're vouching for: yeah, machine learning is hard, but deployment of models doesn't have to be. So that's really the model.


Justin Grammens  19:54  

That's great. And yeah, I will be sure to have liner notes and all sorts of links off to both of your websites and your platform and all that stuff, to share with our listeners all the amazing things you guys are doing. Mohamed, I did mention in the intro that you founded the MLOps Institute and are also working at MIT as a data science mentor. I teach here at a local university in Minnesota in the graduate programs; I teach an IoT and machine learning course. So it's funny when you were talking about IoT and ML; there's this area of TinyML that's kind of a new term everyone's using. What got you into that? It sounds like you're very interested, Mohamed, in training people in this space. What are you seeing, I guess, with regards to trends, new students coming out, the people you're working with, and the next generation of AI expertise?


Mohamed Sabri  20:44  

I train postgraduates, so it's usually professionals that are looking for a certification and a deep dive into ML and data science. There's a lot of curiosity, I think, and people sometimes underestimate the effort. They always complain that the material is really dense, but it's a dense program, and they usually work and study at the same time. So they ask me, "Mohamed, I've probably already forgotten what we did in week one." I usually tell them, no, the best thing you can do is just practice. Everything in life, and especially in a career, is about practicing. I tell them: anything we've seen, if you don't practice it in the next six months, you'll only vaguely remember what it is and what we did. So get practical work done and practice as much as you can. But there's a lot of interest anyway in careers in data science. Now we've created an MLOps track too, a certification over 12 weeks where you train with us while studying, really dedicated to machine learning operations, which is not yet popular, but our bet is that it will become a popular job. So we're trying to build partnerships with universities here in Canada and in the US. Sometimes, when you speak to people that are not familiar with the industry, ML and MLOps are pretty much the same thing, but they're definitely not. It's one thing to know how to develop a model; it's another thing to manage the lifecycle, be able to deploy models, and be able to automate over the existing pipelines and lifecycle. But teaching is always good; I feel I'm bringing value. I look at the statistics, because the program I teach compiles them for us: I think I've impacted, since the beginning, which is over the last year, a hundred learners, which is good. A hundred learners that I know will be using what I was teaching them, practically, in their companies or their personal projects. And when you teach, you have to be patient too, because even if you think something is simple, don't expect the audience to absorb it as if it were simple. There's a lot of repeating; I have to repeat myself, and that's fine. It's part of the job, I think.


Younes Amar  23:09  

It's one of my weaknesses, actually: I lack patience. So when people ask me, ML, MLOps, AI? My answer is: MLOps is how you turn your ML into AI. If you want to learn more, go talk to Mohamed.


Justin Grammens  23:25  

I like that: how you turn your ML into AI. I think you're onto something there. Mohamed, I guess with regards to viewing this as a skill set that everyone's going to need to learn: I think about DevOps, and I feel like maybe DevOps is further along the curve. But as you start deploying software to the cloud, there's a whole bunch of infrastructure that needs to happen, and there are a number of different people that come together to build a software product, cloud-enabled products: everything from product management (I think you guys know that), and of course software engineers and app developers and QA, and then system administrators. Everything I think of from the early days of the internet about building the software now needs to be operationalized. So I feel like machine learning is still in its infancy here, but it's definitely going to be something that more and more companies are going to need to hire for.


Younes Amar  24:16  

It's going through the same revolution that DevOps went through with the emergence of the cloud. It's really the same thing: what happened with software development is now happening with machine learning. We're in the early stages, and we're starting to see it.


Justin Grammens  24:29  

How big are both of your companies?


Mohamed Sabri  24:31  

On my side, I'm around 10 people today.


Younes Amar  24:35  

And we are about 40 people, projected to be around 50 sometime this year.


Justin Grammens  24:41  

Nice, nice. Okay, well, cool. What's a day in the life, I guess, for you guys?


Mohamed Sabri  24:46  

Busy, busy.


Justin Grammens  24:49  

Are you down in the code, Mohamed, most of the time, programming a lot? Or are you trying to run the organization at a strategic level?


Mohamed Sabri  24:56  

I have a programming team, a team around me, so I don't have the time anymore to program anything, because I'm trying to instill best practices in my teams in terms of engineering development. So I usually follow up with what everybody is doing. I like doing debugging; I'm the chief debugging officer. In case something's wrong, I usually have a 40- or 45-minute call with the engineer and try to fix the code problem. So yeah, but mainly it's running the company, doing conferences, meeting customers, working on strategy, and managing the team.


Justin Grammens  25:37  

Gotcha, gotcha. 


Younes Amar  25:39  

Yeah, for me, it ranges from shaping the vision and the strategy of the company and the product, to sometimes really being in a Jupyter notebook testing the Wallaroo SDK. We're a product-led organization, so the product is our asset, and we're all empowered to know how it works and to provide feedback on it: not just the engineers building it, but also the customer teams, the product team, and the sales teams. So yeah, it's really a wild rodeo, a wide range of activities in one day. But it's a lot of fun. That's what comes with working for a Series A startup: it's high risk, but also high reward.


Justin Grammens  26:24  

That's great. My organization here has about 25 people. I really like small boutique shops, right? That's kind of what we do. I think we're probably similar to you, Mohamed, in that, yeah, we help companies use AI, but we also build a lot of software: mobile applications, cloud apps. It's fun to be small and nimble, I feel, because then you actually get a chance to wear a lot of different hats; you're not pigeonholed into a specific thing. So being in a fast-paced startup product company like you are at Wallaroo sounds super exciting, and being able to work with a small team and solve problems for customers like you're doing, Mohamed, sounds awesome as well. How would either of you advise somebody to get into this space? Say I'm a recent college graduate just coming out of school: what are some resources either of you would suggest, whether it be people on Twitter, or obviously, Mohamed, there's the book you wrote as well. Whoever wants to jump in and answer first: where do you suggest people start?


Younes Amar  27:23  

I think that's Mohamed's favorite question. So let's see.


Mohamed Sabri  27:28  

I think whatever path you take, the most important thing is practical experience. So get a job in the field at a certain moment, because that's the hardest part. Not at every point along the path, but at the beginning, getting into the field and getting your first experience is the hardest, just because of what is happening today in the hiring market. It's like a pyramid: because the field gained popularity, there are so many new entry-level, junior people that want to apply, versus very few senior people, just because it's a field that gained popularity in the last ten years. And from the employer's perspective, the pyramid is reversed, meaning that companies tend to look more for senior people. There are of course entry-level and junior positions in data science, or even ML or MLOps; it's just that companies are craving intermediate seniority, to a certain extent. That has even impacted the number of years required, on average, to call someone senior in the field. So yeah, that's basically the market situation, meaning that a person should try to learn, either from the internet, or through internships, or through certifications, and then do whatever he or she can to get a job right away. At that moment, when you start looking for a job and you want to get one, you've got to fight for it; you're going to struggle, and it's hard. It's very competitive. Then, after you start having experience in the field, it's very easy to switch, because you get contacted, people want to hire you, et cetera. But getting the first experience is always the hardest.


Younes Amar  29:23  

From my perspective, because I work in product: this field of machine learning, AI, and data science has notoriously been very technical. It has been somewhat exclusively technical, focused on developing models and mining data, and it can sometimes be intimidating to people in product, as they might feel like they're not needed. They come in and work with data scientists and engineers who are extremely talented experts in the field, and they sometimes get the illusion that they're not needed in this field. I'm actually participating in a program right now called Illuminate AI, which focuses on a lot of topics, but my topic is around product in AI: how can you become a product manager or a product professional in AI? The one piece of advice that I give there is that you don't need to be an expert data scientist or an expert in machine learning to get into the field. All you need is really to be curious about the field and understand the impact that AI can make. And as you understand the lifecycle and the stages that data goes through, from its raw form all the way to being artificial intelligence, you'll be able to understand exactly who gets involved when and what they're doing. At that point, what you end up focusing on is really the tail end: either you build AI-enabled products and focus on delivering the best experience to the users of those products, or you focus on building products that help streamline the process of developing and operationalizing AI. Like myself, for example: I'm working on building a product that streamlines operationalizing AI and machine learning. Even assuming I don't understand everything, I can still go and define a good product experience that a data scientist or machine learning engineer would benefit from, just by listening to them. And that's really the other trait I would recommend looking for if you're hiring for a product role in AI: be curious, be open-minded, and try to listen to your audience. You're building a product for your users, not for yourself. It doesn't matter what background you have; you can come from marketing, from business, from engineering. If you have those traits, and you're willing to understand the pain points and the outcomes that you're driving, you will be very successful in this field. It doesn't have to be any different from building an e-commerce platform or a mobile app or anything; you're still solving a user problem at the end of the day.


Justin Grammens  32:01  

That's awesome. I love that you said you don't need to be an expert, because I think none of us are; that's what's so cool about this whole field. With new technology in this emerging space, everyone is picking it up; at some point, all of us were picking up that book on machine learning. At the end of the day, we're all just experimenting, and that's really why I started this podcast: to just talk to people and learn. I've been having a ball doing it, and I'm by no means an expert in any of this stuff. So it's really good to have that open mindset as you come in and tackle these problems. Great. How do people reach out to you, Mohamed? Is it best on LinkedIn, or...?


Mohamed Sabri  32:38  

Yeah, I think LinkedIn is cool, or email as well. That's usually the way I communicate; those are my only two channels.


Younes Amar  32:44  

Same here: LinkedIn, primarily. You can reach out to me directly, you can follow me, you can send me a connection request.


Justin Grammens  32:51  

Cool. Well, I'll be sure to put your LinkedIn profiles in as links for people to reach out. Are there any other areas of MLOps, I guess, that maybe I didn't touch on that you guys wanted to share as well?


Mohamed Sabri  33:02  

Not necessarily. I mean, there are a lot of things to say, but I think we covered some of the interesting aspects, like the challenges and how to build a career in the field, which is pretty interesting.


Younes Amar  33:16  

No, I don't think I have anything else.


Justin Grammens  33:18  

Sounds good, sounds good. Well, I appreciate the time from both of you. I think it was really interesting to hear the product side, and then the entrepreneur, startup, services business side of it. It'd be awesome to come back, maybe in another 12 months, and see how the field has evolved and changed. I think it's a growing area, and a growing pain point that more and more businesses probably haven't really thought through. Everyone says, well, it's great: I have this model, I've run the Jupyter notebook, I have some data, now I have a model, so basically just push a button and now it's in production, right? And it's like, no, there are a lot of components and a lot of things around business process that need to be factored in, and then everything around how you retrain that model in the future. One last thing before we close out here: there's sometimes a human-in-the-loop aspect to some of these things that I think some companies need to delve into. Have either of you had to work with companies to bring that capability into situations?


Younes Amar  34:22  

In what context, exactly? Just to make sure I understand.


Justin Grammens  34:26  

Yeah, well, I was just thinking about models that drift, for example. Some of them could basically have drifted because you don't have enough data, but you also maybe need some human input, some additional tagging of features, because this is really something that needs to be a supervised learning model. A company might need to deploy some number of human workers to actually update the model. So how does that factor into the entire process as you're building out the operations?


Younes Amar  34:57  

Yeah, that is actually one of the main things we strive for at Wallaroo when we talk about model monitoring and model observability. As you scale, does it mean that you have to scale the team with it, and the number of workers that have to be involved to catch things? The short answer is no: if you have a thousand-model business, you don't need a team of 500 data scientists. There are techniques out there where you can put the necessary validation checks into production. As a data scientist, you're the one who's intimately familiar with the models; you know exactly where the holes are, and you check for those. Your definition of drift is, from my perspective, up to the data scientist to define. And that's just one side of the equation; the other side is really the business analytics. The two have to come together and work together, because data scientists can see that sometimes the model is not doing anything wrong, but the business still hasn't reached the metrics or the objectives that it set out to achieve via that investment in AI. Those are the gaps. Essentially, having the ability to quickly translate model insights into business insights will certainly help alleviate a lot of concerns, and bring the other side of the business, the business analysts or the business intelligence side, into understanding directly what's happening with AI, or ML, when it's running in production, so they're able to take action, or at least correlate the insights from machine learning with business metrics. That's really one of the challenges that every company today would be talking about, primarily because, again, the field is new, and this notion of model monitoring is still a blind spot. Monitoring models, as I mentioned, is just the starting point; understanding the impact on the business is really where it's at. So there's a lot to unpack there, a lot to do.
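As a minimal sketch of the "validation checks instead of headcount" idea, in generic Python rather than Wallaroo's API (the rule bounds, the toy model, and the logging-based alert hook are all illustrative assumptions), checks defined by the data scientist can run on every prediction and involve a human only on violations:

```python
# A sketch of inline production validation: cheap checks run on every
# prediction, so humans are paged only on violations rather than watching
# dashboards for 1,000 models. Bounds and the alert hook are hypothetical.
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_guard")

@dataclass
class ValidationRule:
    name: str
    lower: float   # data-scientist-defined bounds for a sane output
    upper: float

def validated_predict(predict_fn, features, rules: list[ValidationRule]):
    """Run the model, then apply each rule to the output; log an alert
    (a stand-in for a real paging integration) on any violation."""
    prediction = predict_fn(features)
    for rule in rules:
        if not (rule.lower <= prediction <= rule.upper):
            log.warning("ALERT [%s]: prediction %.3f outside [%.2f, %.2f]",
                        rule.name, prediction, rule.lower, rule.upper)
    return prediction

def toy_fraud_scorer(features):
    """Hypothetical stand-in model, not a real fraud scorer."""
    return sum(features) / (len(features) * 10)

# Example: a fraud score should stay in [0, 1]; this input violates the rule.
rules = [ValidationRule("fraud_score_range", 0.0, 1.0)]
score = validated_predict(toy_fraud_scorer, [3, 8, 120], rules)
print(f"score = {score:.3f}")
```

The same pattern can extend to the business side Younes describes: a second set of rules watching aggregated business metrics (conversion, approval rate) and alerting when they diverge from what the model metrics suggest.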


Justin Grammens  37:04  

For sure, for sure. Well, excellent, guys, I appreciate your time today. And yeah, I look forward to keeping in touch with you, Younes and Mohamed, in the future, and best of luck to both of you and your organizations.


Younes Amar  37:15  

Absolutely. Thanks so much for having us.


AI Announcer  37:18  

You've listened to another episode of the Conversations on Applied AI podcast. We hope you are eager to learn more about applying artificial intelligence and deep learning within your organization. You can visit us at appliedai.mn to keep up to date on our events and connect with our amazing community. Please don't hesitate to reach out to Justin at appliedai.mn if you are interested in participating in a future episode. Thank you for listening.