Session Notes: AI in Action: Where Pharma Portfolios Are Really Seeing Results
Brigitta Voss
Senior Consultant & Lecturer - Program & Portfolio Management, Ariceum
Fabienne Shehad-Zuend
OCM Change Capability Lead, Roche
Michael Collins
Global Head, Research and Development Project Management, CSL Behring
Prerna Maheshwari
Associate Director – Lead QC-PL Integrated biologics, Lonza
Roberta Sassu
Portfolio Reporting Director, Program Strategy & Planning, Novartis
Executive Summary
A panel of pharma portfolio management experts discussed practical AI implementation experiences, emphasizing that data quality and human governance remain critical success factors. Organizations are seeing immediate value in decision support and predictive analytics, but successful adoption requires balancing top-down strategy with bottom-up innovation while maintaining strict human oversight for regulatory compliance.
Full Notes
Change Management as the AI Adoption Bottleneck
The discussion opened with a fundamental challenge: getting people to work differently with AI tools. Fabienne Shehad-Zuend from OCM highlighted two distinct organizational approaches - the experimental 'let's see what evolves' camp versus the intentional change management approach. The key insight emerged that success requires addressing personal benefits, not just organizational ROI. As one speaker noted, 'I want to know what is in it for me. How does it change my daily life?' Organizations are finding success by starting with pain points, particularly administrative tasks, allowing people to focus on more strategic work. The consensus was clear: AI fluency will become as essential as English proficiency within 2-3 years, making personal skill development urgent.
Data Quality as the Foundation
Multiple speakers emphasized the 'garbage in, garbage out' principle as fundamental to AI success. Roberta Sassu shared that data platform quality improvement was the first major AI application in portfolio management, noting how AI helps identify missing data in massive portfolio reports. Organizations are investing heavily in data cleaning algorithms and predictive tools that forecast milestone achievement. However, speakers warned against blind trust in AI outputs. Prerna Maheshwari stressed that even with improved data processing, validation remains critical: 'We work in a controlled environment. We cannot just push a button and take decisions. It's still hybrid work - you need to validate what's coming from the tool.'
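Roberta Sassu's point about AI flagging missing data in massive portfolio reports can be illustrated with a minimal completeness check. The field names and record layout below are hypothetical, not any panelist's actual schema; a real pipeline would plug into the organization's data platform and a far richer rule set.

```python
# Hypothetical required fields for a portfolio record; real schemas differ.
REQUIRED_FIELDS = ["project_id", "phase", "next_milestone", "milestone_date", "budget_status"]

def find_missing_fields(record: dict) -> list[str]:
    """Return the required fields that are absent or empty in one record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

def completeness_report(records: list[dict]) -> dict[str, list[str]]:
    """Map each incomplete project to its missing fields, skipping complete ones,
    so a reviewer sees the gaps before the report goes to decision-makers."""
    report = {}
    for rec in records:
        missing = find_missing_fields(rec)
        if missing:
            report[rec.get("project_id", "<unknown>")] = missing
    return report
```

The output is meant for a human reviewer, in line with the panel's "garbage in, garbage out" caution: the check surfaces gaps, it does not fill them.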
Portfolio Decision Support Delivering Immediate Value
The panel shared concrete examples of AI impact in portfolio governance. Michael Collins described transforming overwhelming 120-slide portfolio review decks into focused decision frameworks, with AI identifying critical strategic decisions and presenting actionable options. Predictive analytics are helping teams forecast timeline risks and adjust plans proactively. However, speakers noted that results depend heavily on baseline measurement capabilities - many organizations lack historical metrics to quantify AI improvements, making success dependent on user-reported benefits. The technology is proving most valuable in filtering massive data sets and removing subjective bias, though human judgment remains essential for final decisions.
Governance and Organizational Structure
Regulatory compliance drives strict AI governance requirements, with speakers citing recent FDA observations about AI overuse without human oversight. Organizations are implementing 'human-in-the-loop' processes where AI-generated content must be reviewed and validated by humans before release. The governance question sparked debate about centralized versus decentralized approaches, particularly challenging for organizations growing through mergers and acquisitions. Most successful implementations balance top-down strategy from Chief AI Officers with bottom-up innovation through pilots. Prerna emphasized accountability: 'The human is the key, the owner is accountable. When something goes wrong, it is still you.' Organizations are establishing shared AI tool registries to prevent silos while maintaining innovation momentum.
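The "human-in-the-loop" requirement, where AI-generated content must be reviewed and validated before release, can be expressed as a small state machine. This is a hypothetical sketch of such a gate, not any organization's actual workflow; the class and status names are assumptions:

```python
from enum import Enum

class ReviewStatus(Enum):
    DRAFT = "draft"          # AI-generated, not yet reviewed
    APPROVED = "approved"    # validated by a named human reviewer
    REJECTED = "rejected"

class AiOutput:
    """AI-generated content that cannot be released without human approval."""

    def __init__(self, content: str):
        self.content = content
        self.status = ReviewStatus.DRAFT
        self.reviewer: str | None = None

    def approve(self, reviewer: str) -> None:
        # Recording the reviewer keeps accountability with the human owner,
        # per the panel: "when something goes wrong, it is still you."
        self.status = ReviewStatus.APPROVED
        self.reviewer = reviewer

    def release(self) -> str:
        if self.status is not ReviewStatus.APPROVED:
            raise PermissionError(
                "AI-generated content must be human-approved before release")
        return self.content
```

The design choice worth noting: the gate is enforced in `release()`, not left to convention, so the audit trail (reviewer name, status) exists wherever the content is used.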
Key Decisions
- ✓ Human-in-the-loop validation required for all AI-generated content
- ✓ Data quality improvement must precede AI implementation
- ✓ Centralized tool registry needed to track AI pilots across organization
Action Items
- → Organizations implementing AI — Develop a shared registry of AI tools being tested and piloted across the organization (status: open)

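The shared AI tool registry in the action item above could be as simple as a central record of each tool's owner, stage, and business case. A minimal sketch under assumed field names; the stages mirror the transcript's pilot / production / abandoned wording:

```python
from dataclasses import dataclass

@dataclass
class ToolEntry:
    name: str
    owner: str
    stage: str          # e.g. "pilot", "production", "abandoned"
    business_case: str  # required, per the panel: no random pilots

class ToolRegistry:
    """Central, shared record of AI tools so teams can see what is being
    piloted, what is in production, and what was abandoned."""

    def __init__(self):
        self._entries: dict[str, ToolEntry] = {}

    def register(self, entry: ToolEntry) -> None:
        if entry.name in self._entries:
            # Prevent duplicate, siloed pilots of the same tool.
            raise ValueError(f"{entry.name} already registered")
        self._entries[entry.name] = entry

    def by_stage(self, stage: str) -> list[str]:
        """Tool names currently at a given stage, for a quick portfolio view."""
        return sorted(n for n, e in self._entries.items() if e.stage == stage)
```

Even this thin layer delivers the two benefits the panel named: visibility (everyone can see what is in pilot or abandoned) and a forced business case at registration time.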
Key Insights (14)
- Data quality remains the critical foundation (Roberta Sassu)
- Human-in-the-loop governance is non-negotiable (Roberta Sassu)
- Portfolio decision support showing immediate value (Roberta Sassu)
- Change management determines AI adoption success (Roberta Sassu)
- Predictive timeline algorithms gaining traction (Roberta Sassu)
- AI fluency becoming core competency (Roberta Sassu)
- Bottom-up innovation requires top-down strategy (Roberta Sassu)
- Bias awareness critical despite AI objectivity (Roberta Sassu)
- Establish shared AI tool registry (Roberta Sassu)
- AI as servant, not master (Roberta Sassu)
- Human accountability remains (Prerna Maheshwari)
- Critical evaluation required (Fabienne Shehad-Zuend)
- FDA AI governance observations (Roberta Sassu)
- AI adoption measurement KPIs (Roberta Sassu)
Full Transcript
Apr 23, 2026 AI in Action: Where Pharma Portfolios Are Really Seeing Results - Transcript 00:00:00 : There is a crowd at the tables toward the end of the room while the tables here in front still seem a bit empty. In terms of networking and talking to new people: would people come to the front and change to another table? No one is biting. It looks a bit empty here in the front, which is not so nice for the speakers. We are not biting either. We are actually creatures of habit: if I asked around, how many went back to the same table as yesterday? You're trashing my discussion for after lunch. So: AI in Action, where pharma portfolios are really seeing results, and maybe we start right away with Fabienne from OCM. 00:01:55 : What does it take to introduce OCM in pharma and get people to work in a new way? And by the way, if you have questions from the audience, just raise your hand and we will include you. It's a very good question; we had this topic yesterday during the round table as well. Of course I work in change management, so I believe in working on the human side, and yet we still see that many companies do not invest enough in it. We asked yesterday why that is so. I cannot give you an answer, but I liked what some people said: if you have leadership who believes in the people, who is already training for this and building up capabilities, you will also have companies that follow. I'd say there are two camps. The first camp says: just put it in, let's experiment, let's see how it evolves. 00:03:08 : Let's see what happens.
Your processes and people's ways of working will adapt. Personally, I love that way: let's see what evolves, almost Darwinian, and whatever survives is the valid approach. The second camp goes back to change management: let's make intentional change. Let's understand how people are working today and what would be in it for them to adopt these AI tools. It's not just bolting a tool on top and seeing what happens; it's intentionally putting it in place with a benefit. I've seen both work; it depends on the company and its culture. To build on that: what I have seen in portfolio management and the use of AI is that it is very important that we really start to embed this in our day-to-day work. We have to pilot small pieces, have a minimum viable product and a feedback circle, and one of the next steps is how we build trust and comfort with all the stakeholders. That is at least part of the culture. 00:04:17 : Let me add what I struggled with in the beginning: breaking the barrier, convincing people that we can use a tool. It helps when we show that there is a clear business case, a clear ROI, a clear benefit for the organization. That removes the first barrier, and then with the pilot, as you mentioned, we can progress and show what we are getting out of it and what we envision. This really helps. Maybe I want to add to that. We very often talk about how it benefits the organization. But honestly, me as Fabienne, I want to know what is in it for me. How does it change my daily life? I can understand how it benefits the organization overall; I'm a team player, I can go along. But in the end it changes my ways of working, and how does it change them, and why should I care? 00:05:19 : So it's very important to highlight it on a personal level, not just on an organizational level.
And to that I would add that it is also important to start with understanding the pain points, and to introduce AI by showing that it can solve the pain point of administrative tasks so that people can focus on more strategic work. Just show the example and start using it. This is the nice thing about pilots: you can convince people right away, especially the innovation antibodies who say "oh, it never works" or "I don't get what I want." Then you come in with a small pilot and people think: okay, it can work. A point I made yesterday that I want to reiterate: we are all professionals in project and portfolio management processes; this is where we come from. When it comes to the AI domain, though, this is a new area for a lot of us. So as part of adoption, it's not only up to the companies; it also starts with us familiarizing ourselves, getting the practice, getting trained, being curious. The companies, meanwhile, need to lower that adoption bar, because 00:06:45 : naturally, if I'm asked to do something for the first time and I've never had exposure, there is going to be risk aversion: why do I need to change? So never forget: you have to start being curious and develop your expertise, and companies have to take an intentional approach as well. So maybe moving away from the change management part, since so far it has been about how we can move people: where have you already seen a difference in portfolio management, in portfolio decisions, driven by AI?
For me, one piece that is very important compared to years ago is data platform quality. The very first thing you can use AI for is this data; it's the basic one. It was really powerful for me, because my first question when I get the reporting is: am I missing some data here? It is a huge amount of data across all the portfolios, and I give it to the executives to take a decision based on it. 00:07:56 : I'm not always sure the data is complete, so that is where AI is adding value: helping to clean the data. Another thing we use a lot is algorithms that help to predict. For example, we look at timelines and next milestones, and we have tools that help predict whether we will hit the next milestone on time, earlier, or later, so we can adjust the plan or take the risk and make whatever decision we need to take. Of course, we use this data knowing there may still be some gaps behind it. It is just support: we pilot it, it is an assistant, and that is where I think it is very valuable for supporting decision-making. Prerna, do I say your name correctly? Do you have an example where you are already using AI in portfolio management? Yes; we piloted, and then we realized some of the things were not giving satisfactory results, just as you said, because the data wasn't accurate enough. 00:09:21 : So data is really, really important, because if we put garbage in, we get garbage out. It is that simple: if we don't get the right data, it's of no use.
So we really need the clean, quality data so that our decision-making rests on the right set of data. We fixed it quite a lot later, but we are not there yet. This is what we have in overall portfolio management: s*** in the system, s*** out, I would say. A really practical example I share with people: if you've ever been to portfolio reviews or governance meetings, you get a deck for a Friday meeting with 120 slides, all these memoranda, giving way too much information. Going back to what Tim was saying this morning: if you have codified what the strategic decisions are that need to be made, then instead of reading through everything in far too much detail, you can ask what the important decisions are coming into this meeting. 00:10:27 : Based on that information you can already have good plans; it can give you the information back and it can also give you the options. That's where we were using it in my previous company, just for simple portfolio governance reviews. And in project planning, as I said yesterday, you can do due diligences, come back with charters, feed into risk management. You're not just predicting what needs to be done; prediction is easy. The great tools of tomorrow will prognose: this is what you need to do, here are three choices, which one do you want? Okay, one question over there. We need a mic. Yep. Thank you to the panel for everything that's been shared in terms of practical examples of the results. I was curious: when you're describing these results, it's a little bit more on the qualitative aspect.
00:11:37 : So I was wondering about more quantitative results of incorporating AI in your experiences. Is this something that is being tracked; are you able to measure it? A little bit of context: in our case, for some of the data points being solved with AI we didn't really have hard quantitative metrics of the baseline status. We know there is some improvement, but the improvement that can be measured depends very much on users reporting it, and the seniority and experience of those users impacts that. So it's very hard to establish average, reliable data. We usually measure through project success rates. We have key KPIs that determine the success of a project; we have customer satisfaction, how many deliverables were on time, how many risks were identified proactively, and the risk mitigation. So we have a lot of KPIs to measure success, and we often have baselines for these already before implementation. 00:12:57 : So we can easily track whether we are on the right track with the implementation. Yeah.
I have a question about the organization. Instead of everyone saying "I want to use that tool," do you have an organization that looks at all the tools on the market and says: okay, we can try this for two or three weeks, something fast? Or is it more siloed, with everyone playing with different apps until someone comes along and says we need something? That's the main question: how is it organized? We share all the tools that are being tested, so you can have access, see what's going on, what is in pilot, what is in production, what is being abandoned because it is not working. And if you want to bring an idea, you have to have a business case; we don't want to do it randomly. There is a lot to test, a lot to do, and if everyone just explores randomly on their own side, we will lose each other. The business case and its value are tested in the pilots, and then it moves forward. I think it also builds trust, 00:14:29 : because everyone can see what's going on and the data control is centralized. In my organization AI is something very important, and I think in the industry in general. If you look two or three years from now, I think being fluent in AI will be equivalent to being fluent in English; it may even be a line in your CV, and tools like Excel and PowerPoint will still be there, but more as accessories. Maybe I want to add to this one. In the organization I am in, we now very much have the AI hype. We had daily AI training for six weeks, where we were really encouraged to look into it. But what I also feel is that they really want us to use it.
And then for me, yes, I believe AI is a good tool, but we also have to start being critical: does it really give us what we are asking for, does it deliver what it promises, does it make us more efficient? We need organizations where we can also say: yes, we are looking into the tools, but the tools we have found so far are not giving us what we want. 00:15:55 : So let's rather wait and see and not just jump on the first train. And we are back to change management and pilots again. A question from your side? Yeah, it's a similar topic, a question for everyone on strategy for AI. Who's driving it in your respective companies? Is it from senior management down, or is it the people on the ground defining it? And how robust is it? You mentioned everyone's playing at the moment, but is there a clear road map for what we want and what we expect this to do, or is it still a big playground? What I have experienced, from a business strategy perspective and as already implemented in my organization: it starts with senior management. We need a chief AI officer who builds the right governance, who brings the strategy, who defines what exactly we are looking for. It really starts there, and it brings in the domain experts and IT experts, bringing all of them together to build the strategy and then the implementation. 00:17:06 : And here I would slightly disagree, because companies, especially large companies, also try to encourage innovation from the bottom up, and here pilots can be done by individual groups and then pushed up if they are successful. For the big rollout, yes, a big strategy would be needed, but this innovation can and should also come from the bottom.
I think it needs to be balanced. From the top down you need the support, your company supporting the strategy; that is very much needed. And then it goes bottom-up for the innovation ideas, to bridge the gaps that we see. In my organization we are curious and we are encouraged to bring ideas, but we need the support of the leaders, from the top down and in all directions. And of course the message is that AI is here to support us; given the GxP environment, we need to be responsible. 00:18:16 : So this input and guidance from the leadership is important. The top matters: without the support of the leadership we cannot move forward, but we are encouraged to bring ideas and innovation. Okay, a question. Yes. At the end of the day, portfolio decisions are made by people; that will never change. In fact, even the CEO will not make the portfolio decision; it is usually made by representatives across all the functions, the board, and so on. In that respect, those people need a clear recommendation, clear insights, based on very robust analytical calculations, on trustworthy data, with transparency and traceability. Where do you think AI has helped in that process to date, if at all, and where do you think it could help in aiding actual decision-making based on trust? Well, when we use AI, everything is done in a controlled environment. When we bring the data for decision-making, we know the work has been done to check the data, so that what we are providing can be trusted. We cannot just push a button, take the information, and make a decision.
00:19:56 : It is still hybrid work: you need to validate what is coming from the tool. I think we are still at this stage. Maybe two or three years from now, as this evolving process continues and we are all learning, we will be more confident in just using AI, while still keeping a look at the data. It is an environment we control and test. In my perspective, bringing in AI should not change the perspective we worked with in previous years; having AI should bring more and more transparency, and with it trust. Right now, with all our integrity, we work with fully transparent data. It might take some time to pull all the data together to get the right picture or to create more transparency, but we do not intentionally hide data. We always try our best to bring all the data that is there and put the right decisions on the table. What I see with AI is acceleration: the speed with which we can put together the data and get to the right decision.
00:21:22 : In this way our decisions will be tuned further on refined, quality data. Yes, exporting data or bringing it together may produce mistakes, but from a transparency and ethics perspective, intentionally, it should work in both ways. The last thing I would add: for me, AI removes the tendency for bias. And of course I know you will say there is bias in the data, and that is where you still need to control that you are not using a biased data set. But it removes a huge amount of the subjectivity. The human still sits at the end, and that is where bias comes back in, but at least you are getting a really independent, non-biased view, assuming your data is not biased as well. And here I would slightly contradict, because I think what is extremely important is that we are aware AI will also have biases, based on how it was trained and how the data was looked at and how decisions were made before it. 00:22:47 : So it is always important to look not only at the decisions and recommendations of AI, but also to ask which data it is using. Where I see the huge advantage of AI is that the amount of data we are collecting is exponential compared to 20 years ago. We made decisions on a tiny fraction of the data, and now, with what is collected every day, it is not feasible for humans to absorb or calculate it anymore.
So AI is definitely a help in organizing the information that is around. It is going to be better at filtering; it can be better at seeing what data truly matters, and that goes back to data science and machine learning: what data is truly important to drive a decision. You now have access to a lot more data than before, and we now have the data scientists and the technology to tell us what is important for these decisions. And the noise was there beforehand as well: you never knew exactly whether the data you were using was accurate for what you wanted. 00:24:14 : You just had less data; if anything, the noise was even larger because the amount of data was smaller. One last thing: also consider that before, we were probably blind to the fact that there is a ton of ... [transcript truncated]