Career Diaries by Elemed
What 86% of Leaders Are Getting Wrong with AI — and How to Fix It | Michelle Wu | NyquistAI
With a background spanning Boston Consulting Group, Novartis and Silicon Valley, Michelle Wu is a leader at the forefront of AI in MedTech. Today, she’s the co-founder and CEO of NyquistAI and recognised as one of the top 100 women in AI globally.
Michelle believes that successful AI adoption has far less to do with the technology, and far more to do with people. Mindset, culture and clarity matter more than tools - and the companies who get this right will move fastest.
With a career shaped by data, strategy and innovation, Michelle offers an honest look at what AI can (and can’t) do for regulatory affairs, and how leaders can turn ambition into real impact.
Tune in for:
- Why 86% of leaders are struggling with AI
- The three factors every company needs for real adoption
- How to avoid AI fatigue and overload
- What AI will really change in regulatory careers
- Why mindset, structure and data matter more than hype
And much more!
Want to build real GenAI capability in RA?
Apply for elemed’s AI Accelerator 2026: a focused programme where RA leaders pilot one workflow inside real work and measure the impact.
Request access 👉 https://www.elemed.eu/request-access-ai-accelerator/#request-form
Requests are reviewed and seats are limited.
Career inspiration, medtech opportunities, hiring solutions and market insights, all in one place. Find them here.
00:00:02.800 Guys, I'm really excited because this is a live version of Career Diaries by Elemed. I'm with the amazing Michelle Wu
00:00:09.679 and we're going to talk about AI. So, let's get into it. Michelle, tell us a little bit more about who you are and what you do. Thank you so much for having me. Hello
00:00:15.920 everyone. I'm Michelle Wu. I'm the CEO and co-founder of NyquistAI, a third-time founder, and I have always been a tech
00:00:23.039 nerd and a data nerd. My passion has been in life science ever since my very first job. I started my career with Boston
00:00:30.320 Consulting Group as a strategy consultant. I consulted on global strategy and had an amazing time,
00:00:37.920 and then I had this ambition, or curiosity, about the industry. So I joined Novartis. I worked in Switzerland
00:00:44.879 on the very first, and so far the only, asset swap deal in the history of pharma: the three-way merger and acquisition
00:00:51.360 among Eli Lilly, GSK and Novartis. And then I was like, "Oh my god, so much data." And I had to stay up all
00:00:59.120 night crunching numbers and reading through the PDFs, and somewhere along the way I was like, there's got to be a smarter
00:01:05.600 way. So I had this idea, this vision, of a smarter way to sift insights out of life science
00:01:11.920 data, because I feel like the life science industry is a data
00:01:17.040 industry: clinical trial data and even talent data. That's where the seed was planted. I came to the US for my MBA.
00:01:24.400 Got my MBA at Stanford and accidentally drank the Kool-Aid and got
00:01:29.600 the bug for starting my company. I started a social media app during my
00:01:34.720 first year of my MBA and didn't make a lot of money, but also didn't lose a lot of money. We got merged
00:01:41.040 with another big social media app — you know, it's in the dating space. The human element is really, really exciting. The
00:01:48.399 second one — I raised, and it's a typical Silicon Valley story: a 15-minute coffee chat and I got a
00:01:55.759 term sheet of a million dollars, and wow, I went on raising more. But back then I was
00:02:01.680 a terrible founder, a terrible manager. I was smart but way too arrogant, so I ran
00:02:07.040 that company into the ground. That experience really humbled me and taught me so much about myself, and then my team
00:02:14.160 hadn't given up on me and they made some introductions. I met my current tech co-founder. I went back to my roots in
00:02:20.239 life science and worked with my amazing co-founder. Look him up on LinkedIn. He's a tech genius. We worked on the
00:02:27.280 idea for life science data and how to make the industry more efficient. So we've been around for five
00:02:34.080 years. We survived the five years: a small team, about 200 companies, 100
00:02:39.519 customers, global. So yeah, super exciting time. And recently I've been
00:02:45.200 recognized as one of the top 100 women in AI. Yeah. Globally. So that was super cool.
00:02:51.760 The first woman on the list is my Stanford professor Vibil Lee and the second one is the former CEO of OpenAI. I was very
00:02:58.720 honored to make that list. I always love to talk with people and nerd out on
00:03:04.319 innovative stuff. So I published a book chapter on AI for healthcare. Okay.
00:03:10.000 And also got a couple of articles with Forbes and the top medical device and
00:03:16.480 life science journals. Amazing. So you're definitely the right person to be speaking to when it comes to AI. For sure.
00:03:22.000 I'm a student. I'm a lifelong student. Constantly learning. So your company's been going for 5 years. You've been in
00:03:27.599 this space quite early on. What are some of the big trends that you're seeing right now? Yeah. So much is happening and even
00:03:33.840 though it's five years it feels like a very long time and feels like we're just at the beginning scratching the surface
00:03:39.760 of the change, the disruption, and also the innovative power
00:03:46.159 of artificial intelligence. Five years ago when we started out, we were using machine learning and AI to screen all
00:03:53.599 that data — FDA data — to make regulatory professionals more
00:03:58.799 efficient, and when I went to pitch to companies, we'd say, oh, we are an AI company, and they'd ask what AI stands
00:04:05.920 for. It depended on who we talked with: if it's a pharma company, is it
00:04:11.040 API, active pharmaceutical ingredient? Or if it's a medical device company, they'd be
00:04:16.720 like, is it an aortic implant? They had no idea what AI is —
00:04:22.479 you know, those wicked smart MDs and PhDs. So we're like, no, we are a database.
00:04:27.520 We're like a Bloomberg or Google. We are a search engine for all those critical insights for you to file your FDA
00:04:34.960 submissions, like a 510(k), and do your clinical trial protocols. I feel
00:04:41.360 like from 2020 to 2022, during the hype of COVID, was also the
00:04:49.040 most innovative and most free-thinking phase for our industry. So
00:04:55.280 when it comes to AI, people had this naive but also "I'm going to try
00:05:00.880 it" attitude. That phase was the "all in on AI" phase. We've seen the news, the
00:05:06.800 leadership from Johnson & Johnson saying, oh, we're all in on AI. And then when COVID finished, when we
00:05:14.080 finally came out of COVID, I saw the second phase of the big trend, where
00:05:19.199 a lot of companies, large enterprise customers at the top 10 global medical device companies, were like, oh, we need to work
00:05:26.320 with OpenAI, we need to work with Microsoft Copilot. They are all amazing
00:05:32.080 companies. Big tech. I'm from Silicon Valley. All my friends work for Google and Microsoft. They have a grand vision.
00:05:38.560 They overestimated the power of AI's immediate ROI. Okay.
00:05:44.960 And underestimated the huge undertaking of time, education, and
00:05:52.080 change management. Since you are experts in talent, you know change takes time. We went into the second phase, of buyer's
00:05:59.280 remorse: it's not "all in" anymore, it's "let's pause — why haven't we seen the return on AI?" There are a
00:06:05.360 couple of very interesting articles I can embed in this video, like the research from MIT — and Anthropic interviewed
00:06:13.600 80% of their enterprise clients two years later, and almost 80% of
00:06:18.800 them said, oh, we haven't seen the return we expected yet. Now we're entering —
00:06:24.080 Can I ask you, why do you think they haven't seen the return? So,
00:06:29.520 that's a great question. When it comes to life science, when you design a clinical trial, you know, having an
00:06:35.759 endpoint — even if it's a made-up or artificial endpoint — is already 50% of success, because having a goal is
00:06:42.880 super important. Yeah. Regardless of what the goal is. But a lot of people, a lot
00:06:48.000 of enterprises — we have 200 customers, we've seen it — come to this with an AI
00:06:54.319 fantasy lens: okay, it is going to be magic. And they haven't quite set the goals — okay,
00:07:01.840 what's our expectation, what does success look like, with a timeline and milestones. They're like, oh, let's throw
00:07:08.160 money at those tools and see what comes out. To start with, they don't even have a benchmark or goal in mind.
00:07:15.039 And second, a lot of companies, especially large enterprises,
00:07:21.039 have a lot of confidence and belief in their own data, and they
00:07:26.400 think their internal data is so rich and so valuable — which is potentially true — but at the same time
00:07:33.680 they may not have the infrastructure or talent to make sure the data is clean,
00:07:39.280 the data is meaningful. So when it's like, oh, we have this pile of clinical trial data from a former FDA submission, put any
00:07:45.759 foundation model on top of it — it's like cooking: the ingredients are good, so whatever comes out of it must be good. But
00:07:52.560 it's not. It's almost garbage in, garbage out. And also a lot
00:07:58.960 of leadership teams are not satisfied with the result. And three
00:08:05.759 is the whole organizational change that needs to happen to truly unlock the
00:08:11.919 AI potential. A lot of the leaders are like, okay, we're all in on AI, AI is going
00:08:17.680 to be great. I go to JP Morgan Healthcare every year and I have a lot
00:08:22.720 of executives who are my mentors or friends, and I'm like, hey
00:08:27.759 John or Mary, you just got this amazing interview, congratulations on the feature in Forbes — you said you're all in on AI,
00:08:34.320 but what does it mean? Which function? It sounds great, though.
00:08:39.760 Yeah, what's your budget, what's your ideal ROI? And the answer is, my team will figure it out. Yeah. Yeah. So they have this
00:08:47.200 grand goal in mind, and then the people under them are like,
00:08:53.920 wow, gosh, what does it mean? They're scrambling to figure it out to satisfy the leadership rather
00:09:00.560 than get the insights, get the winning cases, from the bottom up. So if the change
00:09:06.560 happens top down without a lot of clarity — and I'm speaking from my old days at
00:09:11.680 BCG as a strategy consultant — it's a recipe for
00:09:17.200 organizational disaster. Why? Because if everything happens from the
00:09:22.560 top without clear guidance and leadership, then for the teams there's a lot of disconnect
00:09:29.360 in communication, in vision, in execution. If a company is like, I want
00:09:35.920 to do something with AI, I know it's important, everyone's talking about it, we're all in when it comes to AI — what's your advice in terms of how should
00:09:42.240 they move forward from that? Yeah, that's a great question, and technology is moving so fast that my
00:09:48.640 answer today may be very different from what my answer will be in three months or six months.
00:09:55.680 So I think there are three things. First, make sure the data infrastructure is there. For any company,
00:10:01.600 you need to have the proper training set to make sure that, whether it's in
00:10:06.720 regulatory affairs or clinical or medical, you have the specific data to
00:10:12.480 power your tasks. Not all AI is created the same. As a tech nerd from Silicon
00:10:18.480 Valley, I'm a big fan of superhero movies. Like Spider-Man said, with great power comes
00:10:25.200 great responsibility — and with great data comes great AI. Not all AI is
00:10:30.560 created equal; leadership really needs to understand the data infrastructure. So that's
00:10:37.040 the number one success factor to win with an all-in-on-AI strategy. And
00:10:42.720 number two is to have the right organizational structures. If —
00:10:48.320 you're a mom — if you tell your kids to go do something, they may not do it. You need to really have the
00:10:55.120 systematic organizational change to have the right incentives and
00:11:00.399 the right organizational structure, either to set up a center of excellence for AI
00:11:06.480 or to have regular article sharing and periodic market research:
00:11:11.760 what's the new technology out there, how are we using it — tracking user activities internally, tracking the
00:11:18.560 success stories internally. So to have all the
00:11:23.839 organizational incentives and the right organizational structure for AI is the second key success
00:11:32.160 factor to win in the AI race. And number three is the shift in mindset. I'm really into
00:11:39.920 mind games or brain hacking, and there's a lot of
00:11:45.920 research on biohacking, but I think recently there are a lot of cool books coming out saying that if we can unlock our
00:11:52.800 brain's potential, our heart's potential, there's so much more to achieve. So really instill a culture of
00:12:01.440 encouraging people to try — it's okay to make mistakes, which previously was a big no-no coming from a very traditional
00:12:08.240 pharmaceutical industry — and have that dynamic, open culture to really let
00:12:14.639 all those mini use cases thrive, to have room to breathe and experiment. That is
00:12:20.000 very, very critical. So: right data, right organization, right culture, to make sure
00:12:26.079 the AI wins. So then drilling down more into that — how would you know if you have good
00:12:32.320 data? Yeah, this is a fantastic question. So
00:12:37.440 three things. First, you look at the data scope. Second, you look at the quality of the data, whether it's
00:12:43.600 traceable and whether you can verify it. And three, you look at how the data is linked and how the machine learning is built on it.
00:12:50.480 I know it sounds very theoretical, so let me take an example: regulatory intelligence for medical device
00:12:57.279 companies. They are now in an ever-changing, fast-paced environment with tariffs; they need to look at the global
00:13:04.160 market, and with all those different innovations happening around the world —
00:13:09.200 surgical robotics, artificial hearts — no one will have perfect knowledge of everything. This is where AI comes in. So
00:13:16.959 first, for regulatory intelligence, you make sure the data scope is large enough that you can cover
00:13:24.160 all the critical geographies and critical regulatory bodies: insights, guidance,
00:13:29.680 standards, competitor intelligence like product information, and so on and so forth. And number two is the quality of
00:13:37.600 the data. It has to be traceable and it has to be verifiable. A lot of
00:13:43.360 the challenge we've seen from our clients, and also experienced ourselves, is: how do
00:13:49.440 we build trust in the AI tools that we are using? How do we verify? In our
00:13:55.760 product design, we always embed the original source. Here's the result, here's the summary from the AI — but if
00:14:02.560 you want to check more, here are all the highlighted quotes and sources, so that people
00:14:07.680 like regulatory professionals, who are very cautious, rightly so, can verify. And three, you need
00:14:15.040 to be able to explain the linkage — how does AI come to the insights it surfaces? For
00:14:21.519 example, a top company was looking for competitors for pediatric use of
00:14:28.320 certain products, and our search surfaced the term "adolescent, under the age of 14".
00:14:35.680 You need to be able to make that data linkage. This is where AI comes into play.
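To make that traceability idea concrete, here is a minimal sketch of an answer object that always carries its sources. The class and field names are hypothetical illustrations, not NyquistAI's actual product schema; the point is simply that a summary is only treated as complete when a regulatory professional can follow each citation back to the original document.

```python
from dataclasses import dataclass, field

@dataclass
class SourceCitation:
    """A verbatim quote tied back to an original regulatory document."""
    document_title: str   # e.g. a 510(k) summary or guidance title
    url: str              # link back to the source so it can be verified
    quote: str            # the highlighted passage supporting the answer

@dataclass
class AIAnswer:
    """An AI-generated summary that is only trusted when it cites sources."""
    summary: str
    citations: list[SourceCitation] = field(default_factory=list)

    def is_verifiable(self) -> bool:
        # A traceable answer must point to at least one original source.
        return len(self.citations) > 0

# Hypothetical usage mirroring the pediatric example above: the reader can
# check the linkage ("adolescent, under the age of 14") in the source itself.
answer = AIAnswer(
    summary="A competing device is cleared for adolescent patients under 14.",
    citations=[SourceCitation(
        document_title="Competitor device summary (hypothetical)",
        url="https://example.com/clearance-summary",
        quote="...indicated for adolescent patients under the age of 14...",
    )],
)
assert answer.is_verifiable()
```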
00:14:41.279 So if a company doesn't have good data, can they borrow somebody else's? I think data is gold. AI will very soon become
00:14:48.560 a commodity — I said that two years ago when OpenAI was coming out, and
00:14:54.000 there were other players: Character.AI, Cohere,
00:15:00.880 Anthropic and Amazon. But very quickly we've seen the
00:15:06.720 winners of the foundation model race becoming very, very clear. And foundation models are so general — they are great to
00:15:15.519 help you rewrite an email or build on other content, but for
00:15:21.360 life science, especially regulatory affairs, they lack the training data. If you can take one thing from our
00:15:26.800 conversation, it's training data, training data, training data. The training data set for regulatory
00:15:32.079 intelligence is missing from all those general foundation models. They really need that specificity to make sure AI is
00:15:40.240 useful for our professionals. So, say I'm a VP of regulatory affairs and my CEO has asked me the question, what are
00:15:46.480 we doing when it comes to AI and I know that I should be doing something but I'm not really doing anything right now.
00:15:51.519 What would you say is the one thing that I should do to start? Yeah, that's a great question. It's a
00:15:58.320 challenging environment right now; our whole industry globally is facing pressure, like cost pressure and
00:16:04.800 budget cuts and freezes, but they are also tasked with
00:16:09.839 these grand, audacious goals to produce more and have more submissions. That's
00:16:15.040 definitely a challenge. The quick win we have seen is regulatory intelligence. So they can
00:16:21.519 leverage AI to look at all the competitors and all the global markets —
00:16:26.639 maybe there is a product sector they haven't thought about, maybe there's a breakthrough designation or faster
00:16:33.040 approval that they were not aware of previously. Regulatory affairs has been highly
00:16:40.720 regarded because of people's tribal knowledge or institutional knowledge, and AI is
00:16:48.720 amazing at democratizing that tribal knowledge to power innovation. So if the
00:16:53.839 VP of regulatory affairs wants to get further, or more advanced, in the
00:16:58.959 decision-making process — which we have seen from some of our users — they can leverage regulatory intelligence to
00:17:04.959 quickly come up with an answer for the board: hey, should we go
00:17:10.640 to Europe, or should we launch the second indication in the US? The regulatory affairs VPs
00:17:18.079 are faced with visionary or crazy or random questions every single day,
00:17:23.599 from chief clinical officers, clinical trial operations, the VP of commercial or BD — like, oh, should we
00:17:31.600 acquire another asset? Compared with the traditional way, where it takes them two days
00:17:37.280 to review everything before even a cross-functional discussion, they can come up with a quick answer to get the
00:17:44.240 discussion going. They are moving from the back office, their traditional position, to be further advanced in the
00:17:52.240 decision-making chain. So it actually can give you a competitive advantage because you can use it to unlock new markets. Yeah. And
00:17:59.280 there is your use case for ROI, right? If you can open a new market and it was because that AI helped you identify that
00:18:04.480 and you got that moving quickly. Ultimately, leadership is driven by the bottom line. That's really what they're interested in. So it elevates
00:18:11.039 regulatory affairs to be more of a strategic business partner rather than just a person that has the technical
00:18:16.160 information. Yeah. So that's really interesting. That's really true. Traditionally, regulatory affairs has been viewed as a
00:18:21.840 cost center or the "no" person. But here regulatory could be a revenue-generating function, by
00:18:28.880 saying, hey, have we thought about this new area, or this is an easy add-on indication we can file another 510(k) for,
00:18:36.000 and so on and so forth. So over the last 5 years, you've been working with companies that have adopted AI, and there was this
00:18:42.400 real energy at the beginning and then it tailed off. What are some of the struggles that you're seeing with companies?
00:18:47.600 So that's a loaded question. Companies are really like people, and you
00:18:52.880 deal with talent and executives all the time. People only change for four reasons. First, it hurts so much that
00:18:59.919 people have to change. Second, they've learned enough so that they are ready to change. Number three, they've been
00:19:05.520 offered so much that they have the time to give to change. And four, they are just very proactive about change.
00:19:12.400 Yeah. So we've actually seen only 20% of our enterprise clients
00:19:19.039 who are using our solutions — the 20% of early adopters — who are just
00:19:25.440 very innovative, always out there searching for new ways to advance their
00:19:30.640 careers. Last night I was doing some homework for our podcast: LinkedIn has research showing that,
00:19:38.880 even though hiring has been slowing down,
00:19:44.240 people who have AI literacy — who know how to use AI, not necessarily needing to
00:19:49.919 code or build AI, but really knowing how to harness AI and leverage it in
00:19:54.960 their work — have reported a 68% increase either in
00:20:01.440 their title or role, or in their salary. Interesting. Yeah, we have seen that too. It's almost
00:20:08.240 like a guaranteed thing. We look at the user activities. We are super cautious about data privacy and data security. We
00:20:15.120 don't track anything about our users, we don't train on our users' data. But we do
00:20:20.400 look at just the number of activities. It's almost guaranteed that if we see a top active user, six to 18 months down the
00:20:28.320 road, I open up my LinkedIn and they got a promotion. Oh, really? Yeah. I think it's because we're really seeing
00:20:33.679 that companies want people. So there's this feeling of, we need to do something. Yeah. Um, so who knows how we can do it,
00:20:42.880 right? And the truth is that nobody really is an expert because it's constantly changing. It's constantly a work in progress, right? There's not an
00:20:48.480 end goal of I do a certificate now I'm an expert now I know what I'm doing and I can move forward. It's this concept of
00:20:54.000 almost like the Infinite Game by Simon Sinek, right? It's just like you're constantly improving and evolving and
00:20:59.200 it's about staying in the game and like constantly learning how to use it and and adapting. So if you get in the game
00:21:05.520 and you start playing the game, all of a sudden you're more desirable as a candidate, right? Because you have that
00:21:11.520 skill set and more importantly that practical experience and so for companies that becomes invaluable. So
00:21:16.960 that's super interesting. What are some of the other struggles that you're seeing? Other struggles — like when companies
00:21:23.760 are evaluating the ROI of the tools, for example efficiency gains or
00:21:28.799 revenue generated, they don't have a baseline to start with. How important. So they overcomplicate the whole AI
00:21:35.760 decision-making process. We have seen the big guys hiring the BCGs and
00:21:41.280 McKinseys of the world — like, okay, help us map out our AI roadmap. So
00:21:46.880 I come from BCG; one ask from BCG, the starting ticket
00:21:52.559 is at least 1 million. So they hire those strategy consultants to spend
00:21:57.840 eight weeks — or, it depends on the scope, eight weeks
00:22:03.520 minimum — to help the leadership map out the AI roadmap and
00:22:08.960 where they could see wins. Mhm. I think the exercise is very important, but
00:22:14.480 maybe it does not need to be a million dollars with a fancy strategy consulting firm, or
00:22:21.679 eight weeks, because guess what — in eight weeks the AI fundamentals are different and the use cases could be very
00:22:28.880 different. All the companies, especially the large ones, have this fantasy of, I need one tool
00:22:36.960 that does it all. It's like, I have this dream love of my life —
00:22:42.559 this person will have all the good qualities I want and at the same time fits
00:22:48.080 all my needs. That person does not exist in real life. So when it comes to AI tools, no one
00:22:55.120 AI tool is going to fit all the requirements and all the tasks. So I think companies
00:23:02.799 having a broad understanding of, okay, here's a rough idea of where we are going, and
00:23:09.679 allowing different functions and different tasks to start experimenting — have the small ones and quick ones — is very
00:23:16.720 critical. Because, you know, Sanofi launched this companywide AI tool
00:23:24.080 and it was the talk of the industry two years ago. Now their IT pulls the
00:23:30.640 activity report every month, and usage is pretty low compared with what they envisioned — that everyone would
00:23:37.600 use it. And Microsoft Copilot also feels some adoption challenges with
00:23:42.799 enterprises. There's a large law firm — it's public information, in the news — that offered $1 million as a bonus for the
00:23:52.240 lawyers who come up with the most prompts. But the number of prompts does not equal the quality of prompts or the
00:23:59.840 actual efficiency gain. So I think companies are overcomplicating their
00:24:05.120 decisions and having the wrong KPIs or wrong incentives to really drive AI
00:24:11.360 adoption. So it's really about offering opportunities for people in your team to experiment. Give me some examples of experiments that people could do to get
00:24:18.159 started. Yeah, yeah. So we have this amazing client. They are not our
00:24:23.279 top revenue client, but we learn so much from them and they are constantly growing. There are
00:24:29.600 a couple of things that they did right. So first, the decision comes from the leadership, but it's not, "we
00:24:35.600 really need to go all in on AI." It's, "hey guys, this is for you. It's not for the organization. You can leverage this
00:24:42.559 opportunity from the organization to advance your career." It's almost like the company's paying you to be more
00:24:47.919 capable in your job. Number one is the empowerment from the leadership team. Number two is they have the guardrails
00:24:55.120 in place, but within limited bounds. It's almost like, okay children, here's the
00:25:00.480 playground: anything you want to do — if you want to play on the slides or the swing or in the sandbox — here is your safe
00:25:07.279 place to play. We worked with our security team, data privacy team, IT team, business
00:25:12.880 team and finance team early on to have a framework: okay, what's our
00:25:18.640 cost and timeline, and what are the things they can or cannot use — to set the framework. And three is to
00:25:25.679 provide access for people to try it out, and have a healthy, collaborative but
00:25:31.840 also competitive environment: okay, today John has an amazing use
00:25:36.960 case to share, and next week Mary has an amazing use case to share. So that has been very, very helpful. What has
00:25:44.240 not been working? Yeah. I've heard companies say, okay, we've already paid OpenAI, we
00:25:51.440 have this companywide GPT, like a medical device GPT or something,
00:25:57.600 and we need to use it — but they are not giving people the right incentives or training or tools. It's almost like doing
00:26:05.200 homework that people hate. So instead of showing efficiency gains or empowering
00:26:11.039 people, it's, oh my god, there's another thing I have to do, have to deal with, to satisfy my boss. That is a big no-no
00:26:18.480 for driving AI adoption. Why do you think there's this resistance from people at a tactical level to use
00:26:25.039 something that's going to help them move their career forward, make them more efficient? Like why would they not want to use that? The first is the lack of
00:26:31.520 education, onboarding and training. Now almost everyone we know knows
00:26:37.679 how to use Google, how to ask a question or generate keywords. But
00:26:43.520 Google has been around for 25, 26 years. When we were children, before our
00:26:48.799 generation, people were just memorizing URLs and going to
00:26:54.480 Yahoo or other websites, reading page after page to come up
00:27:00.720 with their own understanding. So Google actually spent
00:27:05.840 a good part of three decades training people how to come up with a question. So now it's almost
00:27:12.559 second nature. We just go to Google and search: hey, what is a good coffee shop around here, what's
00:27:19.520 the topic at RAPS, what are the cool AI trends, and so on and so forth. And when it comes to AI, take prompt engineering as an
00:27:27.760 example: life science people, regulatory people, are not used to coming up with a prompt. It
00:27:34.640 sounds so intuitive, but people need to know: what's the recipe,
00:27:40.480 what's the structure, what's the framework I can ask with — and each tool has its advantages and limitations.
00:27:47.760 And what we have seen is the IT team or the business team or leadership team, out of
00:27:54.159 their good heart, rush to get a vendor and bring the vendor in. Then people are like, why? We're already very busy,
00:28:00.960 and we just get the login and the password, and then
00:28:06.880 nobody comes to tell us anything. Nobody is like, hey, how are you
00:28:12.399 using it? What are the use cases? Oh, in your world, have you tried this, have you tried that? So there's a long way
00:28:18.799 to go for training and onboarding. The number one missing element is training and onboarding. Number two is the lack of
00:28:26.000 incentives in place, and people are very busy in regulatory affairs. They constantly have to deal with changes from
00:28:31.840 different regulatory bodies, cross-functional education, and also
00:28:37.200 safeguards — protect the patients and protect the company and make sure the company is doing the right thing — under
00:28:44.559 the pressure to satisfy investors and satisfy their shareholders. So it's
00:28:51.760 already a very complex job. So not having the right incentives — just
00:28:58.559 "you have to do it, every day you need to open Microsoft and do one prompt" — people will ask, why? What's in it
00:29:05.600 for me, what is in it for my work? Yeah. So the second failure reason is
00:29:11.279 the lack of proper incentives. Yeah. And three, actually, is very, very minimal —
00:29:17.279 maybe 5% of the reason why people resist using AI, or why AI adoption is so low, is that
00:29:22.480 people simply do not have the time. But I always say, if things are good,
00:29:28.320 they will find the time. I have so many friends who have kids who are just addicted to TikTok, addicted to video
00:29:34.720 games, because it's so fun, it gives them a lot of satisfaction. If the AI tool is
00:29:39.840 good enough, people will find time to use it. So you can see all
00:29:45.919 three reasons. First, lack of proper education on AI and on the specific tools, of onboarding. Second is lacking the right
00:29:52.720 incentives — incentivizing people to use AI rather than forcing people to use AI. And
00:29:57.840 number three is the time limitation. All of them are interlinked. Yeah. Because with the lack of training and the lack of
00:30:04.960 incentives, they just find the excuse: I don't have the time for the AI tool. Mhm. So do you
00:30:11.600 think though that there's a little bit of fear there? Yeah, definitely. People are like I have
00:30:16.720 no time — that's a surface-level cover for fear. People are just worried about
00:30:22.399 making mistakes. People are worried about AI replacing them, about losing their job. There's
00:30:28.080 another great piece of research published by both OpenAI and Anthropic. I will send
00:30:33.440 you the link. Anthropic analyzed all those prompts —
00:30:38.640 Mhm. — and they mapped them to 700 occupations based on the content of the
00:30:44.880 prompts, and analyzed the ratios of automation and
00:30:52.000 augmentation. So what's the difference between automation and augmentation? Automation is like AI can do the task
00:30:59.520 from end to end; there's no need for a human anymore. Yeah. And it may sound very scary to
00:31:05.760 regulatory professionals. And augmentation is, okay, think of AI as
00:31:10.960 your intern or your buddy. You are the master —
00:31:16.159 humans are the master. You give AI a task and you need to educate
00:31:23.279 it, instruct it, and have a checkpoint at different milestones. So the task
00:31:29.279 cannot be completely finished by AI. That's the definition of augmentation,
00:31:34.640 and it depends. When it comes to regulatory affairs — regulatory affairs is definitely among the 700 occupations that
00:31:43.360 will be either augmented or automated by AI, and what is very fascinating is that for
00:31:50.480 regulatory specialists — do you want to guess what percentage of their tasks could be
00:31:55.519 automated? Automation, not augmentation.
00:32:00.960 60%? It's less — it's around 20%.
00:32:06.399 Okay. Yeah, 20%. But when it comes to regulatory managers — that means
00:32:11.600 leadership, strategy and interactions with humans, which cannot be replaced
00:32:17.679 by AI yet — that ratio dropped to 15%. So 15% of what they are doing could be
00:32:26.559 automated by AI, and 85% of what they're doing still needs to be done by
00:32:31.760 a human. So actually it's not coming for their jobs. This is research from August. We can have
00:32:36.799 a follow-on sequel — we'll have a part two in a year and see where we are. I have another question for you. It sounds like they have so much to do, it's such a big thing. I can
00:32:43.840 really see people feeling overwhelmed and overloaded by the concept of okay, where do I actually start? What can they
00:32:49.519 do to avoid AI overload? Yeah, that's a great question. We've seen AI exhaustion like AI fatigue.
00:32:58.240 What is AI fatigue? People are just sick and tired of hearing AI, AI, AI — you open the news and
00:33:05.679 it's, OpenAI is coming for your job, or Sam Altman said 5,000 jobs will be eliminated in
00:33:13.039 one year, artificial intelligence is going to replace the human species — very scary
00:33:18.080 news. I'm very fortunate that I'm trusted by people across different career stages,
00:33:24.720 so I have people who are regulatory interns that I mentor, and
00:33:30.640 they confess to me, oh my god, I'm about to enter the job market and
00:33:36.320 the labor market in the US is really gloomy — will I ever have a job,
00:33:42.720 because of AI? Or I hear from young professionals in regulatory affairs,
00:33:48.159 oh my god, Michelle, I have this fear of missing out — almost FOMO of AI — but
00:33:53.200 what does it mean for my career? My boss is interested to learn, but we don't even know where to start. And then for
00:34:00.080 the decision makers: my inbox and my LinkedIn DMs are filled up with vendors
00:34:05.600 who try to sell me AI, but they all start to look the same, the pitches are the same,
00:34:11.119 and I'm just bombarded with an unbelievable amount of
00:34:16.719 outbound sales calls. Yeah. So it's very noisy now, especially when
00:34:22.719 it comes to how regulatory professionals should use AI tools — there's an AI
00:34:29.760 copilot that is your regulatory co-pilot or co-author, there's a lot of "co-" AI
00:34:34.800 stuff. How to avoid AI overload? First, let's zoom into how you select a project or
00:34:41.599 tool you want to partner with. People's time is limited — we don't have unlimited
00:34:47.119 time, unlimited budget, unlimited energy to try things. So there's a good
00:34:53.679 framework to decide which tasks to experiment with AI on, and how to evaluate
00:34:58.960 AI vendors. Let's go in. Let's go — so talk to me about that. Yeah, because I've been on the other side of the table. I come from pharma, I
00:35:05.280 come from the buyer side, and now I'm a strategic partner. I don't like to call myself a tech vendor; I
00:35:11.040 want to call myself a strategic partner who helps pharma and medical device companies get where they want to be.
00:35:16.720 I've seen the struggle on both ends. So number one, to avoid AI overload, picking
00:35:22.560 tasks is very important. There's only two things you need to worry about. First, do you have the training set for
00:35:29.280 this particular task? And second, do you have a benchmark? We talked about benchmarks at the
00:35:35.200 very beginning of our conversation — do you have a benchmark of success, and what does it mean? Take regulatory
00:35:42.480 strategy or labeling drafts as an example. If you want to use AI to write
00:35:48.320 a submission or clinical trial protocol, do we have the right training set to
00:35:53.920 teach AI how to write a clinical trial protocol or a regulatory submission?
00:36:00.079 We need to have the success data — what are the
00:36:05.839 submissions that got approved by the FDA — and we also need the failure data: what are the submissions that got rejected
00:36:12.880 by the FDA. It's similar with clinical trials: the failures in clinical trials are even more valuable than the trials
00:36:19.680 that succeeded. So whether you want to leverage AI for clinical trial protocol writing or
00:36:27.359 labeling review, you need to look at: first, do we have the right training data
00:36:32.400 for AI to complete that task? And number two is the benchmark: what does good look
00:36:39.200 like? And give me an example, though — this is quite theoretical. So take me through what good actually looks like.
00:36:44.320 So when it comes to why AI can replace software engineers, especially junior software engineers, it's because there are
00:36:51.119 industry standards for what clean, good code means — there are no bugs. But when it
00:36:56.240 comes to regulatory submissions, there are no really good standards on what good
00:37:01.920 looks like. I've taught at different universities, and we have teachers and professors teaching students how to write a
00:37:08.720 510(k) submission, how to write a regulatory submission like an IND,
00:37:14.079 and some professors say we need to be very, very precise — they have run-on
00:37:19.280 sentences with a lot of adjectives. Some professors like very concise, short sentences, to the point. So because of the
00:37:26.720 lack of a benchmark of what good looks like for your industry or
00:37:32.320 organization, it's very hard for AI to do that task. For regulatory intelligence:
00:37:38.640 we want to understand the competitive landscape for surgical robotics globally. So take this task as an example: we
00:37:45.200 want to see if AI can help us with it. Number one, do we have the training set? Yes, we do — the FDA, China's NMPA,
00:37:53.599 Japan's PMDA, they have all released information about clinical trials
00:37:58.640 being carried out globally, and also medical devices that have been approved with surgical robotics
00:38:05.680 in their definition. So we have the training set. Do we have the benchmark of good? We want to get the
00:38:12.640 regulatory intelligence in a timely way, on a weekly or even a daily basis. What good looks like is: we are sure that
00:38:19.520 all the key markets, like the US, Canada, Japan and Europe — all the key
00:38:24.720 markets — are covered. Excellent. And number three is we can verify: we can go to the source to verify that all the
00:38:30.800 data is true — we can look at the original approval documents. So in this particular case it's a winning case,
00:38:36.720 because we have the training data set and we have a good benchmark.
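As a rough way to picture that two-question screen — training data plus a benchmark of what good looks like — here is a minimal sketch. The helper class and its names are assumptions made for illustration, not a tool discussed in the episode; the two checks themselves come straight from the conversation.

```python
from dataclasses import dataclass

@dataclass
class TaskScreen:
    """Two-question screen for choosing a first AI experiment."""
    task: str
    has_training_data: bool  # does the data exist, in-house, publicly, or via a vendor?
    has_benchmark: bool      # can we state clearly what "good" looks like?

    def decision(self) -> str:
        if self.has_training_data and self.has_benchmark:
            return f"GO: pilot '{self.task}'"
        missing = []
        if not self.has_training_data:
            missing.append("training data")
        if not self.has_benchmark:
            missing.append("a benchmark of what good looks like")
        return f"HOLD: '{self.task}' is missing " + " and ".join(missing)

# The two examples above, encoded with this hypothetical helper: the
# intelligence task passes both checks, while end-to-end submission
# drafting stalls on the benchmark question.
print(TaskScreen("global regulatory intelligence for surgical robotics",
                 has_training_data=True, has_benchmark=True).decision())
print(TaskScreen("drafting a regulatory submission end to end",
                 has_training_data=True, has_benchmark=False).decision())
```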
00:38:41.760 So, just one thing I want to clarify: when you say, do we have the data set — if I understand well, you're talking about whether the data exists? It doesn't
00:38:46.880 necessarily mean that the company has the data, but is the data out there? So the data might be publicly
00:38:52.960 available, or a vendor can give you access to that data. So it's not just, does my company have the data — does the data
00:38:58.320 exist for us to be able to do that? And then the second thing you said was — I've lost my train of thought — a benchmark of
00:39:03.920 what good looks like. Okay. So then: the data, does it exist? Yes. Have we benchmarked very clearly, in terms of
00:39:09.920 what good looks like for it to produce? Yes. So then how do you measure ROI? Yeah. So how do you take that next step —
00:39:16.079 what's the time compared with a human? This exercise has already been performed by human experts: what's the
00:39:22.960 time and effort taken by humans, versus you just
00:39:28.160 do a quick search on the platform and then get the automated reports. So let's say a company says, right, I'm going to use
00:39:34.320 this example that we've just done. We're going to move forward with a pilot case, a test case; we're going to experiment, use this for regulatory
00:39:40.320 intelligence. The data exists, check. What good looks like, check. So we're going to make this investment. How do
00:39:45.440 they then measure? Let's say they're going to feed back to the CEO on the ROI of this pilot case. How would
00:39:51.359 they measure the ROI of doing this pilot case? That's a great question, and very simple. You could be a regulatory VP, or at
00:39:58.880 least an advisor for them when they make AI adoption decisions. You
00:40:04.400 could open up another service, like AI strategy consulting. Yeah. This is very good. So if I'm a VP of
00:40:11.280 regulatory affairs, I pilot with this tool and then I want to present the ROI to other sectors. This
00:40:18.000 is two parts: first, revenue gains, the impact on the top line, and
00:40:23.119 second, the impact on the bottom line. And now I add a third, the emotional element: how people feel more confident,
00:40:31.040 more empowered — like the organization actually cares about making
00:40:36.880 the regulatory professionals', the regulatory team's, life easier rather than harder. That
00:40:42.000 is very critical. So, revenue generation: we have seen companies leveraging regulatory intelligence to find new
00:40:48.320 opportunities in new markets. That's a really clear use case, right? Yeah. And they even find new products when they look at the
00:40:54.640 FDA MAUDE database and adverse events. They're like, oh, with this surgical
00:40:59.760 robot the surgery is very successful, but it may have some
00:41:05.520 side effects, like thermal burns, because when the surgical robots cut patients
00:41:11.520 they may be too hot. This could also be an opportunity: they can develop another line of products to deal with
00:41:18.160 thermal burns, because they already have the surgical robot on the market. So not only do they find new
00:41:26.160 opportunities, they also find new product ideas. Yeah. That's the revenue generation part. On the efficiency side, we have seen a
00:41:33.680 lot of companies run a 20-minute test: they gave the
00:41:40.000 user a therapeutic area or a product to research for 20
00:41:45.280 minutes, and it needs to be timed, to see, okay, what do they get the manual or traditional or classic way —
00:41:52.720 going on Google, going to the websites, going through everything. And then they show them the tool, teach them how
00:41:59.200 to write the prompt or how to do a search, and boom, in 20 minutes, what do they get? Then do a comparison: what
00:42:05.599 did they get on their own that AI didn't capture, and also what did AI capture that they
00:42:11.359 missed. So it's a very clear apples-to-apples comparison. You need to have the baseline before
00:42:17.040 you start — where are we coming from — and then you're looking either for impact on the top line or impact on the bottom
00:42:23.359 line. A lot of the time, especially in regulatory affairs, there is a very strong efficiency gain. So that's really interesting.
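To make that baseline-versus-AI comparison concrete, here is a minimal sketch of the arithmetic. The function and every number in the example are hypothetical placeholders, not figures from the episode; the structure simply mirrors the 20-minute test: time the task the classic way, time it with the tool, and translate the difference into an efficiency gain and an annualized saving.

```python
def efficiency_roi(baseline_minutes: float,
                   ai_assisted_minutes: float,
                   tasks_per_month: int,
                   hourly_cost: float) -> dict:
    """Compare the classic way of doing a task against the AI-assisted way.

    Hypothetical helper: baseline_minutes is the manual time per task,
    ai_assisted_minutes is the time with the tool, tasks_per_month is how
    often the team performs the task, and hourly_cost is a loaded labor rate.
    """
    saved_per_task = baseline_minutes - ai_assisted_minutes
    gain_pct = 100.0 * saved_per_task / baseline_minutes
    monthly_hours_saved = saved_per_task * tasks_per_month / 60.0
    return {
        "efficiency_gain_pct": round(gain_pct, 1),
        "monthly_hours_saved": round(monthly_hours_saved, 1),
        "annual_saving": round(monthly_hours_saved * 12 * hourly_cost, 2),
    }

# Made-up example: a competitor review that used to take two working days
# (about 960 minutes) now takes a 20-minute search, run 10 times a month,
# at a notional $100/hour loaded cost.
print(efficiency_roi(baseline_minutes=960, ai_assisted_minutes=20,
                     tasks_per_month=10, hourly_cost=100.0))
```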
00:42:28.480 And the third element: we didn't pitch this to the VP of regulatory affairs or the chief
00:42:34.560 regulatory officer early on, but we started to find out, through our engagement
00:42:39.680 with the users, that they are actually very happy. We have
00:42:44.800 so many of these exchanges that I no longer call them my customers and users, I call them my friends, because I really care. I'm
00:42:50.800 really happy they're winning. They said, oh my god, I'm just so happy — it saved me one hour of time, I feel
00:42:57.440 like I'm regaining my life. I'm a busy mom, I have five
00:43:02.960 PMAs or one clinical trial I have to work on, but now, with the AI
00:43:08.240 tools, I can walk my dog, I can spend time with my children, I don't need to do the
00:43:13.440 busy work. Yeah. And I think that it does have that impact on retention as an employer, right? I do believe we're going
00:43:19.359 to be at a point where, in the same way that companies evaluate candidate skill sets before they hire someone, like how are you
00:43:26.400 using AI, right, it will very soon become a differentiator — because if everyone else is using it and you're the
00:43:32.319 one that's not, that will also impact your prospects career-wise. But I do believe, for companies to really
00:43:37.520 attract like really good people, those candidates that are used to using AI, different tools and stuff like that,
00:43:43.440 when they're joining a company, they will be asking at interview like what does your tech stack look like? Yeah. What are the tools that you offer to
00:43:50.400 your teams in order to empower them to be better, more efficient, so they can really focus on the human side of the
00:43:55.839 work? Yeah. And then as a hiring manager, if you're basically turning around and saying nothing, like
00:44:00.960 Microsoft and email and that's it, that's going to impact whether people are gonna want to join your team or not.
00:44:06.480 I think in particular, the companies that we're seeing win right now are the smaller companies, not so much the big
00:44:11.760 companies because the big companies are these massive heavy machines, a lot of red tape. They're not really, you know,
00:44:17.680 like it's really difficult to move things in these big companies. Smaller companies can take faster decisions of, okay, we're going to test that, we're
00:44:23.440 going to give it a go. If we like it, we're going to go with it. if not we're going to kill it. And so actually people that are working in like these smaller
00:44:28.640 organizations they're having more exposure to more interesting tech stacks to experiment experimenting more with AI
00:44:35.280 and so you know when evaluating career prospects if they're looking at large organizations and they're saying what
00:44:41.119 are you guys doing and the answer is nothing because we've been stalling because we've got red tape because we're trying to build our own thing and it's
00:44:46.800 taken like two years to build something and actually then we built it and it doesn't really work so now we're changing our strategy. That's going to
00:44:52.160 make you as an employer less attractive. So you're going to get less good talent, right? And all the good people are going
00:44:58.240 to go elsewhere. So it's not just like topline ROI and efficiency gains, but it's also employer branding, retention,
00:45:05.280 all of that as well to really consider. Absolutely. We've sat down with all the top 10
00:45:12.800 medical device companies' executives, from VP and above, and 86% of them told me,
00:45:19.520 you know, we are really not using AI, or we are not using the AI
00:45:27.280 tools correctly. It's like, oh, I have my Copilot myself — the last time I opened it
00:45:32.400 was when IT installed it for me; I made sure I could log in, and that's it. And what we have seen as a fascinating
00:45:38.960 trend is the mid-cap market:
00:45:44.000 they are the rising stars. They want to become the heavy hitters, the Medtronics
00:45:51.119 of the world, and they also have a slightly smaller team. We talked about the four
00:45:56.480 reasons for change — people have to have the pain to change, be willing to change and learn — and they
00:46:03.119 don't have the resources. They don't have an army of people doing regulatory intelligence
00:46:09.440 globally. They don't have an army of people just doing regulatory publishing checks — that
00:46:14.960 manual process. So they have to innovate, they have to adapt. So they almost have this
00:46:21.359 AI employee as a buddy to the human experts, paired with their
00:46:27.280 experience. We have seen a lot of wins, surprisingly, from the mid-cap market. Mhm.
00:46:32.400 So if you're in a small company or a mid-size company, you actually have a competitive advantage to
00:46:38.000 get started and leverage this. So start — it's about the feedback, right? Don't wait. Just move forward.
00:46:44.079 Where do you see this going? I want to have a disclaimer: I can only see three months, six months — or
00:46:50.319 really just three months — ahead, because the industry is changing so quickly. AI is
00:46:56.000 changing so quickly. What I see is that the future will have more and more AI
00:47:02.480 employees alongside human employees, because the regulatory VPs are not going to have the huge budget, and they
00:47:09.760 are also thinking strategically now. Previously they wanted an
00:47:15.040 army of people that they can manage and lead — how big a headcount they have
00:47:21.280 is equal to how big their influence and power within the organization is — and this is going to change completely. We
00:47:28.319 will have nimble but mighty regulatory teams within the organization. What I
00:47:33.680 see in the future is that AI agents will have a lot of potential in regulatory affairs,
00:47:40.079 both for consulting firms and manufacturers. People will be more comfortable — there
00:47:45.760 are two elements: upskilling people to use AI, motivating
00:47:51.040 them or scaring them a little bit that they need to change, which is going to happen gradually. But where we see the
00:47:57.280 huge uptake is AI agents: have an AI agent do your competitor
00:48:03.680 analysis, a warning call alert, or periodically look at changes to your SOPs
00:48:09.839 versus global guidance. That has a huge, huge uptake; people, especially
00:48:15.119 the leadership team, are very happy to adopt AI agents into their workforce. The team is happy because they
00:48:22.400 are fed up with the busy work. A lot of people say, especially at the mid level, oh my god, I used to be so good at
00:48:28.880 searching the FDA, I could memorize the guidance back to back, and now AI is
00:48:34.319 taking all that away. Don't worry — it also opens up a new skill set, new
00:48:40.480 space and possibilities for you to upskill and learn something
00:48:45.920 new. It's almost like, if people are obsessed with the tedious manual tasks in
00:48:52.319 regulatory affairs, they are almost trapped in a toxic relationship: they're so busy getting by that they
00:48:58.880 haven't lifted their head up to see, oh, what are the options out there. And by AI augmenting and
00:49:06.480 automating some of the tedious work, it really gives regulatory affairs a breath of
00:49:12.480 fresh air. I think the scope of regulatory affairs is going to evolve very quickly. Previously they were
00:49:19.200 known as the "no" person, saying no to everything, and now they are seen as
00:49:24.400 beyond operational — they have some revenue element. In the future they may even be a core strategic center:
00:49:30.960 when corporate strategy says, here are the therapeutic areas we need to go after, here are the countries and indications we
00:49:37.280 need to prioritize, regulatory input is going to be so important, so beneficial,
00:49:43.200 to help companies make sure they have a best-in-class or first-in-class portfolio so they can really win
00:49:48.640 market share. One piece of actionable advice that people can go and do right now to
00:49:54.319 get started? Yeah, I debate this a lot. There's no one silver bullet for it. But at the
00:50:01.119 same time — whether for a former FDA hotshot, because we have like 20
00:50:09.119 customers or friends who are former FDA officials using our platform, or for interns who
00:50:15.040 have no real-life regulatory experience yet — I think one piece of advice is: start experimenting today. Start
00:50:21.599 small. Start with regulatory intelligence and experience how AI can
00:50:27.359 take away the tedious work — the computing power, the speed, and also the mistakes you would otherwise make — and then take it
00:50:34.000 from there. Start experimenting today, start experimenting small, and constantly
00:50:39.680 learn from it. It doesn't need to be months of strategic visioning or
00:50:45.200 a roadmap. Every day five minutes, or every week five minutes, is going to make a huge difference,
00:50:50.640 because it's building the muscle ultimately, right? So even putting some time in your calendar, actually dedicating time.
00:50:56.240 Yeah. Yeah. Block your calendar — like, five minutes for AI. Yeah, that's a really good idea. So
00:51:01.599 final question that I love to ask everybody on the podcast. What is the legacy that you want to leave on the world? Wow. Yeah. I feel like it's such a
00:51:08.720 privilege to build Nyquist. The universe, or some magic beyond my comprehension, has
00:51:14.880 taken me on this wild journey. I feel so privileged to talk and exchange ideas,
00:51:20.960 to have this strategy discussion with a leader like you. I still feel so privileged. I have a front seat to the AI
00:51:27.359 changes in Silicon Valley, all the drama and all the wins and all the tears, and I feel so privileged to witness the
00:51:35.599 wins of our customers — now friends — who get their promotion, get the breakthrough
00:51:41.680 designation. The legacy I want to leave behind is when people say, "Oh, I have this breakthrough designation thanks to
00:51:48.240 Nyquist." Or, "Wow, I really got my dream job" — thanks to the fact that we
00:51:54.079 have an education program so that people really know how to use AI. My goal is to democratize the tribal
00:52:00.800 knowledge to really power innovation. I love that. Michelle, it's been a pleasure. Thank you. Cheers. Cheers.
00:52:06.720 Cheers. Cheers. Cheers, everyone.