Introduction to the Stacked Podcast
00:00:02
Speaker
Hello and welcome to the Stacked Podcast, brought to you by Cognify, the recruitment partner for modern data teams. Hosted by me, Harry Gollop.
00:00:13
Speaker
Stacked with incredible content from the most influential and successful data teams, interviewing industry experts who share their invaluable journeys, groundbreaking projects, and most importantly, their key learnings.
00:00:25
Speaker
So get ready to join us as we uncover the dynamic world of modern data.
Exploring A/B Experimentation at Scale
00:00:34
Speaker
Hello everyone, welcome back to another episode of the Stacked Data podcast. Today, we're diving deep into A/B experimentation at scale: how companies can leverage rigorous testing to drive product decisions, improve customer experience, and create real business impact.
Dimitri on Flow's Data-Driven Culture
00:00:54
Speaker
Joining me today is Dimitri, VP of analytics at Flow, one of the world's leading health and wellness apps. Flow's built a data-driven culture that embraces continuous learning and innovation, running over 1,500 experiments to refine and enhance their product.
00:01:12
Speaker
In this conversation, we're going to explore the fundamentals of experimentation, the journey of scaling from early-stage testing cycles to a full-fledged experimentation framework, and the challenges that come with running hundreds of tests at once.
00:01:29
Speaker
We'll also discuss how Flow has balanced speed and quality in its experimentation approach, and hear some real-world examples of how data-driven insights have really shaped their unique product.
00:01:41
Speaker
So if you're thinking about building or scaling experimentation in your own company, this is definitely an episode not to miss. Excited to get into it. And yeah, welcome to the show, Dimitri. A pleasure to have you on. How are you doing?
00:01:53
Speaker
Thank you, Harry. Very good. All good. Thanks for inviting me. Happy to be here. Actually, I think this is the first podcast for me. So yeah, excited. Brilliant. Well, honored that this is the first one, and excited to get into it. I've loved learning about how you guys have scaled experimentation to a level I haven't really seen at any other organization, maybe other than the Metas of the world.
00:02:21
Speaker
But yeah, I suppose it would be great for the audience if you could give a bit of context on yourself, your career, and your current role at Flow.
Dimitri's Journey in Analytics
00:02:28
Speaker
Yeah, sure. So let me start with myself.
00:02:32
Speaker
I'm Dimitri. I'm originally from Belarus and recently moved to London. I've worked at Flow for more than six years now, and started as, let's say, a founding analyst there.
00:02:48
Speaker
And now I'm VP of analytics, with a 30-person team of brilliant people responsible for product, marketing and finance analysis. And if we're talking about Flow, perhaps not everyone knows about Flow.
00:03:05
Speaker
So basically it's the number one women's health app. It initially started as a period tracker, but now it's more like a platform that combines period tracking and other modes, like trying-to-conceive and pregnancy modes, with personalized health insights.
00:03:24
Speaker
And yeah, we have around 75 million users globally; that's our monthly active audience. So we are pretty huge. Amazing. Yeah. I mean, the scale, I think, is what's going to be most impressive about this episode, Dimitri. So let's jump into it and let's start with the basics.
Understanding A/B Testing and Its Benefits
00:03:43
Speaker
Yeah, can you explain A/B experimentation in its simplest terms, and why it's so valuable for data-driven product companies?
00:03:53
Speaker
Yeah, sure. So A/B testing, or experimentation, is basically a way of testing product or marketing changes where we don't roll out a specific change for everyone once it's ready, but instead preserve a control group, usually one control group, that doesn't receive this change, this feature, for some time.
00:04:18
Speaker
And then you measure the difference in target metrics between these groups, basically the test group and the control group. In a nutshell, it sounds very simple.
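To make the "measure the difference in target metrics" step concrete, here is a minimal sketch of how a conversion-rate difference between a control and a test group might be assessed with a two-proportion z-test. This is a generic illustration with made-up numbers, not Flow's actual methodology.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of a control group (a) and a test group (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-sided p-value
    return p_b - p_a, z, p_value

# Hypothetical numbers: 4.0% vs 4.4% conversion with 50,000 users per group.
lift, z, p = two_proportion_z_test(conv_a=2000, n_a=50_000, conv_b=2200, n_b=50_000)
print(f"absolute lift={lift:.4f}, z={z:.2f}, p={p:.4f}")
```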
00:04:28
Speaker
I can explain why it's valuable, actually. There are two main things I can mention here. I suppose there are many more that could be mentioned, but I would focus on only two.
00:04:41
Speaker
First is velocity, because experimentation basically helps you to parallelize product changes and to choose only those changes that really impact user behavior and other business goals positively.
00:04:59
Speaker
And basically, running experiments gives you the ability to scale your growth tenfold. Recently there was a post by our CEO, Dmitry Gursky: we are running around 200 experiments simultaneously on any given day.
00:05:16
Speaker
So it speeds us up a lot. And actually I would emphasize here that we embrace parallel testing a lot, because without parallel testing it doesn't really help to speed things up.
00:05:32
Speaker
So I know there are some people who are against parallel testing, but we embrace the pros of it. And the second valuable thing in experimentation is accuracy: accuracy in decision-making, because it gives you the most precise estimate of the impact of, you know, some specific feature on user behavior.
00:06:01
Speaker
So, yep, in short, this is it. We can go deeper into each. So I suppose, in a nutshell, it's the ability to compare two different features to see which one's going to have more impact on the business, on the product, on the customer.
00:06:18
Speaker
It enables you to make data-informed decisions on product direction and feature changes. How did it all start for Flow? You've obviously scaled to running 200 tests simultaneously, but you mentioned you were one of the founding analysts.
00:06:32
Speaker
How did it start, and what were the key motivations for having experimentation embedded within the product's lifecycle?
Flow's Experimentation Initiation in 2019
00:06:40
Speaker
Initially, we started in 2019, I think, and we already had a massive audience, maybe around 20 million active users.
00:06:51
Speaker
And we started with some small experiments with content, with the insights that we serve to our users. As far as I recall, they were experiments with our health assistant feature.
00:07:05
Speaker
It was basically some dialogues with users about health conditions or, you know, feelings, some insights. And we had some tests there, our first tests.
00:07:19
Speaker
And experimentation actually helped us, again, to run things in parallel, basically testing more than one hypothesis at the same time.
00:07:31
Speaker
And we tried to increase the number of people who finish a specific dialogue, a virtual dialogue. And I think experimentation was the only, well, maybe not the only, but the best way here, because the overall impact on very high-level metrics was rather low.
00:07:54
Speaker
Because not many users at that point were reacting to these dialogues. But also, maybe even more importantly, we tried a few different variations of specific messages and calls to action,
00:08:13
Speaker
choosing the one that best fits our goals and the user's goals. And basically the idea was to maximize the number of people who finish the dialogues and get a real recommendation on how to move forward with their issue.
00:08:29
Speaker
Then I think we also started some small experiments with payment screens as well. The idea was to ensure that everyone understands what they're paying for and that the information about the subscription is clear.
00:08:46
Speaker
For example, sometimes users understand how the trial period works, but sometimes not, and we don't want to have many refunds after payment. That's why we decided to make it clearer and optimize the number of refunds.
00:09:05
Speaker
And yeah, it was pretty good in terms of the overall reaction from the team. I think the team was impressed by how quickly and clearly we could provide a specific recommendation on what to roll out and how to choose a winner.
00:09:23
Speaker
So I think that was kind of the starting point of the future scale. Brilliant. So I suppose it sounds like you had some success and some wins in some clearly monetizing areas of the product through experimentation, which I suppose has led to this scaling journey that you've been on within the experimentation space.
00:09:47
Speaker
Now, there are the technical aspects of experimentation, which we will come to a bit later, but I think first and foremost are the cultural aspects: getting that transformation in and building that experimentation culture, not just within your data team but within your stakeholders, and having it at the center of what they're doing. So how did you approach, I suppose, getting buy-in from different teams and stakeholders and building and enabling an experimentation culture
Building an Experimentation Culture
00:10:22
Speaker
at Flow. What are your strategies in that space?
00:10:24
Speaker
Yeah, very good question. First, look, setting an experimentation culture is a continuous process. You cannot set something once and use it forever.
00:10:38
Speaker
You need to improve it. And we always see some areas where we can make it better. And it's a journey. Everyone needs to be prepared for that, because otherwise, you know, it's a question of expectations.
00:10:54
Speaker
So I think culture is a good question, because I think we started with education. We were continuously investing, and still are investing, a significant amount of time explaining the key concepts of experimentation.
00:11:11
Speaker
Currently it's more focused on new team members, right? When someone joins and maybe comes from another type of company. And right now we are focusing more on team-by-team introductions and education, where we can not just, you know, run through some general things, but also provide some personalized examples that are more relevant for a specific team.
00:11:39
Speaker
If we're talking about content teams, it's how to measure content. If we're talking about the growth team, it's how to measure user and revenue growth. So education is very important generally.
00:11:53
Speaker
And it's hard to say when it became like that, but at some point in time, a critical cultural evolution actually came from our leadership,
00:12:08
Speaker
when they started to replace opinion-based discussions with statements like "let's test it". And you know, that helped a lot, because when you start doing it top-down, of course it helps.
00:12:26
Speaker
And one more additional factor here, I think, was that we had clear goals for teams. We use the Objectives and Key Results (OKR) framework to set goals.
00:12:42
Speaker
And we also try to decompose these goals on the analytics side to provide clearer vectors to the teams for how they can achieve their goals. Right?
00:12:55
Speaker
And once a team knows what they need to move, experimentation becomes more like a valuable tool instead of a fancy statistical feature.
00:13:06
Speaker
So it's a combination of our efforts, buy-in from our leadership, and changing it top-down. Brilliant.
00:13:17
Speaker
I think you need a sort of dual-pronged attack: work with senior leadership and help them see the benefits. And I suppose when they have a tangible example and have seen real impact in front of them, I think that's a great way to get that buy-in; you get that aha moment, which I think is what those teams are striving for. And then it sounds like there was a lot of waving the flag around the rest of the business, explaining the benefits and helping them see the value as well, approaching it from both bottom-up and top-down to get full penetration.
00:13:56
Speaker
And as you said, it sounds like it's a constant and evolving journey as well. Dimitri, when it comes to, I suppose, best practices for experimentation, how do you ensure that product and analytics are both on the same page when it comes to aligning on what the best practices are and how they should approach experimentation?
Tools for Experimentation Alignment
00:14:22
Speaker
Because I think that's also a cultural challenge in itself.
00:14:25
Speaker
Well, yeah, definitely. I think when we just started, it was more of an evolutionary way of doing things, but the team grew, and basically it means that, you know, different teams can run things a little bit differently from the process side.
00:14:45
Speaker
So I think we have a few tools that help us run it in a more unified way. For example, we use Jira for task tracking, and in Jira we have a special ticket type, which is called "experiment". And basically there is a specific process behind this ticket type.
00:15:14
Speaker
And all teams try to follow it. This helps a lot. And this was done with the help of our product ops team, which helped to more or less standardize the process of running experiments.
00:15:30
Speaker
Then we have a template for experiment documents, with all the important things to fill in. We can say that it's kind of a checklist, and all the teams try to follow it as well.
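As a purely illustrative example of what such a checklist might capture, here is a hypothetical experiment spec sketched as a Python dataclass. The field names are assumptions about typical contents, not the actual Confluence template Flow uses.

```python
from dataclasses import dataclass

@dataclass
class ExperimentSpec:
    """Hypothetical experiment template: the fields a team fills in before launch."""
    name: str
    hypothesis: str                  # what change is expected, and why
    primary_metric: str              # the single metric the decision is based on
    guardrail_metrics: list[str]     # metrics that must not regress
    target_audience: str             # segment the experiment is targeted at
    traffic_split: dict[str, float]  # share of eligible users per group
    minimum_runtime_days: int        # avoid stopping on day-of-week noise
    owner: str

spec = ExperimentSpec(
    name="paywall_copy_v2",
    hypothesis="Clearer trial wording increases trial-to-paid conversion",
    primary_metric="trial_to_paid_rate",
    guardrail_metrics=["refund_rate", "d7_retention"],
    target_audience="new_users_ios",
    traffic_split={"control": 0.5, "test": 0.5},
    minimum_runtime_days=14,
    owner="growth_team",
)
print(spec.name, spec.traffic_split)
```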
00:15:46
Speaker
And the third part, which is really important here, is that we have our in-house experimentation platform, which is basically used to run and analyze experiments.
00:16:00
Speaker
Again, we can discuss it in more detail, or you can find some details about it on our Medium, because we have, I think, around two articles about it.
00:16:13
Speaker
We can put some links in the show notes to the documents and the Medium articles. Great, yeah. And then you'll be able to check how we work there and how we use the experimentation platform.
00:16:26
Speaker
So yeah, basically to wrap up: the process is more or less standardized using Jira; some important technical, analytical and release-specific things are standardized using the Confluence experiment template doc; and the experimentation platform is, you know, the unified way of running things and analyzing them.
00:16:54
Speaker
Excellent. And Dimitri, you mentioned the platform there. I suppose that's the next area I'm keen to understand,
Technical Infrastructure for Experimentation
00:17:01
Speaker
just, I suppose, more about the technical aspects and the infrastructure.
00:17:05
Speaker
What infrastructure and tools does Flow rely on, and how did you develop them to support scaling to 1,500 experiments?
00:17:17
Speaker
Yeah, so in 2019 we actually started with very simple tools and technologies, rather simple, I'd say. By that time, we already had an internal user profile service.
00:17:35
Speaker
And this service stores all the important attributes for each user, and these attributes can then be used to target features and experiments. And, you know, it helped us a lot, because extending this user profile for experiments didn't take a lot of time; it was more or less there by design initially. But a user profile is more of a backend thing, and when we're talking about an experimentation platform, it's not just backend, it's also front-end.
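To illustrate how a user profile service can double as a targeting layer for experiments, here is a minimal sketch. The attribute names and the deterministic hash-based assignment are assumptions made for illustration; Flow's internal service will certainly differ.

```python
import hashlib

# Hypothetical user profile: attributes a profile service might store per user.
user = {"user_id": "u_123", "country": "GB", "app_mode": "cycle_tracking", "platform": "ios"}

def is_eligible(profile: dict, targeting: dict) -> bool:
    """A user is eligible if every targeting attribute matches their profile."""
    return all(profile.get(key) == value for key, value in targeting.items())

def assign_group(user_id: str, experiment: str, groups=("control", "test")) -> str:
    """Deterministic assignment: hash the experiment name plus user id into a group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return groups[int(digest, 16) % len(groups)]

targeting = {"country": "GB", "platform": "ios"}
if is_eligible(user, targeting):
    print(assign_group(user["user_id"], "paywall_copy_v2"))
```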
00:18:15
Speaker
So we have a team whose goal is basically to improve this experimentation platform.
00:18:25
Speaker
We started building it somewhere between 2019 and 2020.
00:18:32
Speaker
I think that in 2019 the market was more or less at an early stage, so we decided to build our own platform rather than buy something.
00:18:46
Speaker
I think right now you have more options to choose from. But definitely this combination of two things helped us a lot. We also have, of course, well, not a lot of, but more like specific dashboards dedicated to deeper experiment analysis.
00:19:08
Speaker
So yeah: dashboards, the user profile for targeting, and the experimentation platform for backend and frontend. These are the three main things that we use every day.
00:19:21
Speaker
Brilliant. And I suppose, it sounds like you've built a lot of the technology to manage this internally, and you alluded to it just now, but if you were starting this journey today, would you still build this entirely or would you look to buy? Yeah, good question. So I'd say that I'd recommend in any case running some evaluation of build versus buy.
00:19:48
Speaker
We did it back then, but again, the market was at a more or less early stage. And I think today there's a higher chance you can buy something that fits your needs.
00:20:00
Speaker
Yeah. So it depends on the context, depends on the company, but I think right now build versus buy becomes a little bit more complicated a question, because you have more options on the market, right?
00:20:13
Speaker
So I'd say it's inevitable that you'll need to run this type of analysis.
Challenges of Scaling Experimentation
00:20:20
Speaker
Brilliant. I suppose you wouldn't know unless you're deep in the weeds, and you don't need to deal with that right now.
00:20:27
Speaker
So what are some of the biggest challenges you faced when scaling experimentation to the level that you did, and how did you overcome some of those challenges?
00:20:40
Speaker
Basically, I think I can always mention a few. Let's try to focus on the most important ones. I think there are a lot of articles talking about balancing quality and velocity.
00:20:56
Speaker
And of course, this is a common issue for everyone. A common question, an open question: how to balance it. So ideally we want to be as quick as possible and have the best quality, of course, but that's not always possible.
00:21:11
Speaker
So there's always a trade-off around it. Quality can be kind of a separate issue, not just in the balancing, but more like, you know, how to get trusted results.
00:21:23
Speaker
It can always be a separate one. I think, beyond experimentation as a tool, I can mention a thing that I think everyone tries to tackle: the overall strength of the hypotheses that we are testing and, of course, the reporting around it.
00:21:49
Speaker
And maybe more related to our company, or again, maybe to slightly bigger companies, is having a unified approach for running things and analyzing them.
00:22:06
Speaker
It's always a challenge, because teams are decentralized and of course they depend on one another, but it can still be an issue because they're a little bit isolated as well.
00:22:20
Speaker
So yeah, there are many of them; we can dig deeper into specific ones if you want. Yeah, I think the balancing of quality versus velocity would be the one, so I'd love to expand a bit more on how you make that trade-off. Do you have a process internally where you'd look to, I suppose, decide on what sort of trade-off you're going to make? I think that'd be really beneficial to understand: what you look at to make that decision.
Balancing Quality and Speed in Testing
00:22:51
Speaker
Yeah, so I think if we're talking about this specific trade-off, velocity and quality, generally we cannot slow down,
00:23:06
Speaker
so we optimize both things at the same time. As for how we ensure quality, I think we do it in different ways. Generally it's about a unified methodology first. We need to get results that we can trust, instead of, you know, picking some random methodology and saying that this is the correct thing.
00:23:33
Speaker
What else? If we're talking about balance again, I think we run a lot of different small initiatives, for example, to tackle quality.
00:23:47
Speaker
Around two years ago, we started analyzing the experimentation process the way we analyze everything else. For example, where do we have more failed, or as we call them, invalid experiments?
00:24:02
Speaker
Are there any specific teams that struggle with this issue more? Can we somehow group them by, you know, underlying reasons and then tackle them reason by reason?
00:24:16
Speaker
Yeah, something like this. For velocity, we're doing a slightly different thing. So, for example, last year we launched a slightly different process for the evaluation of experiments.
00:24:30
Speaker
We call it, for some reason, experiment segmentation, but it means that we delegate decision-making for simple experiments completely to stakeholders.
00:24:42
Speaker
And we provide tools for them so they can be sure that the results they get can be trusted. And this actually helps to speed things up.
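As one hedged illustration of what such self-serve tooling for "simple" experiments might encode, here is a minimal decision rule: ship if the primary metric is significantly positive and no guardrail regresses significantly, otherwise stop or escalate to an analyst. The thresholds and structure are assumptions, not Flow's actual rules.

```python
def simple_experiment_decision(primary_lift: float, primary_p: float,
                               guardrails: dict[str, tuple[float, float]],
                               alpha: float = 0.05) -> str:
    """Return 'ship', 'do_not_ship', or 'escalate_to_analyst'.

    guardrails maps metric name -> (lift, p_value); a negative lift is a regression.
    """
    guardrail_regression = any(lift < 0 and p < alpha for lift, p in guardrails.values())
    if guardrail_regression:
        return "do_not_ship"
    if primary_p < alpha:
        return "ship" if primary_lift > 0 else "do_not_ship"
    return "escalate_to_analyst"   # inconclusive result needs a human look

print(simple_experiment_decision(
    primary_lift=0.004, primary_p=0.002,
    guardrails={"refund_rate": (0.0001, 0.60), "d7_retention": (-0.001, 0.30)},
))
```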
00:24:53
Speaker
So yeah, we're doing a lot of things to balance this. Generally, we are paying a lot of attention to velocity, and quality is very important as well.
00:25:06
Speaker
So yeah, different directions, different initiatives there. Hope that was clear, because, I mean, there is no one specific answer to that, I'd say.
00:25:18
Speaker
It sounds like, with what you mentioned about enabling self-serve users with tooling, for the smaller product changes, the smaller-picture changes which are going to have maybe a lesser impact, you empower your stakeholders to be able to run them at a high velocity.
00:25:39
Speaker
And then maybe some of these bigger, more transformational changes are where you really double down: whilst velocity is also important, quality then becomes the key factor, because it's going to have such a substantial impact. Is that the case?
00:25:55
Speaker
Yeah, exactly, exactly. And I can mention a few things about how we ensure that we are balancing them well. So, for example, sometimes when we need to roll out something quicker, we can use swap testing, as we call it.
00:26:17
Speaker
Basically, it means that we launch something with a smaller test group and rely on proxy metrics only; that is, we have some short-term metrics that can predict the long-term outcome.
00:26:32
Speaker
Right? And when we see positive signals, we can swap the groups and basically have a smaller control group and a bigger test group, the test group with the change,
00:26:44
Speaker
and leave it for some time to ensure that, once the long-term metrics are calculated, the results are the same. So I think this is a more specific example of how you can test something quickly, roll out something quickly, but at the same time be able to recheck yourself, to recheck the results of a specific change and roll back if something doesn't go well.
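A minimal sketch of the swap-testing idea as described above: start with a small test group, check short-term proxy metrics, then swap the allocation so the change gets most of the traffic while a small control group remains for long-term validation. The allocation numbers and function names are hypothetical.

```python
def initial_allocation() -> dict[str, float]:
    # Launch cautiously: the change only goes to a small test group at first.
    return {"control": 0.9, "test": 0.1}

def swap_allocation(proxy_metrics_positive: bool) -> dict[str, float]:
    """If short-term proxy metrics look good, swap: the change gets most of the traffic,
    and a small control group is kept to verify long-term metrics later."""
    if proxy_metrics_positive:
        return {"control": 0.1, "test": 0.9}
    return {"control": 1.0, "test": 0.0}   # otherwise roll the change back

allocation = initial_allocation()
# ... after enough data on proxy metrics (e.g. paywall views leading to trial starts) ...
allocation = swap_allocation(proxy_metrics_positive=True)
print(allocation)   # {'control': 0.1, 'test': 0.9}
```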
00:27:14
Speaker
Amazing. Has there been an instance where you've slowed an experiment down specifically to ensure quality? Have there been, say, these bigger sort of transformational changes, or do you tend to make lots more smaller iterations to the product? What's, I suppose, the decision-making process for assessing whether to make big changes, or do you just focus on making smaller, more granular changes?
00:27:44
Speaker
So I'd say that we cannot usually slow down, and we push hard to achieve results. What we actually do is, as I said, we started to analyze, for example, invalid experiments, where we had some issues with the setup or something like this.
00:28:06
Speaker
So right now we track this metric and we try to keep it at a reasonable level, a reasonably small level. And once we see that there's something going on and the metric increases, we analyze it deeply and try to fix it.
00:28:23
Speaker
Usually it's more of a team or specific domain issue, and usually we can tackle it separately. So yeah, we don't slow down usually. But again, at the same time we're trying to monitor quality and run some actions to bring the quality back to a level that is acceptable for us.
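To illustrate the kind of meta-monitoring described here, a minimal sketch that computes an invalid-experiment rate per team from a log of finished experiments. The data shape, field names and threshold are assumptions for illustration only.

```python
from collections import Counter

# Hypothetical log of finished experiments: (team, status).
finished = [
    ("growth", "valid"), ("growth", "invalid"), ("content", "valid"),
    ("content", "valid"), ("payments", "invalid"), ("growth", "valid"),
]

total = Counter(team for team, _ in finished)
invalid = Counter(team for team, status in finished if status == "invalid")

for team in sorted(total):
    rate = invalid[team] / total[team]
    flag = "  <- investigate" if rate > 0.25 else ""   # illustrative threshold
    print(f"{team}: {invalid[team]}/{total[team]} invalid ({rate:.0%}){flag}")
```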
00:28:48
Speaker
That's great. And it sounds like it's become core to the culture, and it's all systems go when it comes to experimentation.
Decentralized Teams and Fast Experimentation
00:28:58
Speaker
How, Dimitri, have you been able to, I suppose, structure your teams in a way that is going to
00:29:06
Speaker
effectively enable this quick delivery of experimentation? I think that's something a lot of data teams and leaders would be interested in understanding.
00:29:19
Speaker
Well, I think this is not just about the data or analytics team. I think first it's more about engineering or marketing or whatever else helps you to launch things quicker.
00:29:34
Speaker
So I didn't mention it, but I think we've taken a lot of actions to help us actually run things much quicker than before.
00:29:45
Speaker
We have, I think, a very high percentage of no-code experiments, which are set up through configs, through specific things that don't require coding.
00:29:59
Speaker
And it means that, you know, you can test things without involving engineers full time. This is, I think, the first important thing that everyone can follow to ensure that you are quick enough.
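The no-code setup described here is presumably driven by configuration rather than client code changes. Purely to illustrate the idea, here is a hypothetical remote-config style experiment definition and how a client might apply its assigned variant; the schema is invented, not Flow's.

```python
import json

# Hypothetical remote-config style experiment definition, shipped without a client release.
config = json.loads("""
{
  "experiment": "onboarding_copy_v3",
  "targeting": {"platform": "android", "locale": "en"},
  "variants": {
    "control": {"headline": "Track your cycle"},
    "test":    {"headline": "Understand your body"}
  }
}
""")

def apply_variant(assigned_variant: str, config: dict) -> dict:
    """The client renders whatever payload the config defines for its variant, so new
    copy or layout tweaks need no engineering work, only a config change."""
    return config["variants"][assigned_variant]

print(apply_variant("test", config))   # {'headline': 'Understand your body'}
```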
00:30:16
Speaker
Then I think we try to decentralize teams, and by decentralized teams we're talking about product, marketing, basically all the teams.
00:30:27
Speaker
They have their separate goals, their separate objectives and key results, which of course contribute to the company objectives and key results. And if we're talking about the analytics team, we use an embedded approach.
00:30:43
Speaker
So team members are actually embedded into specific domains and they work there for 80-90% of their time.
00:30:55
Speaker
They help with goal setting, and they actually share these goals with the team, which helps them to be more focused on, you know, achieving these goals.
00:31:06
Speaker
And this decentralized, embedded structure also helps you to be in the full context of what the team is focused on and how you can help.
00:31:16
Speaker
So yeah, it brings another level of cooperation and collaboration between team members. Yeah, basically, I think these are the main points I can mention right now.
00:31:29
Speaker
There are some smaller things, of course, but I would focus on these ones. Excellent. And you mentioned that you're running 200 experiments simultaneously.
00:31:41
Speaker
How do you coordinate that? How do you avoid conflicts, maintain clarity? And I suppose to compound the question, Dimitri, how do you ensure you're actually getting valuable learnings and effective measurements from these experiments all at once?
Parallel Testing: Pros and Challenges
00:32:01
Speaker
Yeah. So first, parallel testing is not bad. Again, some people think that it's risky.
00:32:12
Speaker
In some ways that's true, but parallel testing, from my perspective, has many more pros than cons. But of course it requires some additional actions and things to be sure that, you know, you're not affecting other teams, for example.
00:32:33
Speaker
So I think our teams are focused; they usually work on a specific feature or a set of features, and that's why they control what they launch in that feature, right?
00:32:46
Speaker
So we usually try to avoid conflicts within a specific feature. Some parallel experiments can be run if they only intersect, you know, evenly and do not affect one another.
00:33:03
Speaker
But it means that, yeah, sometimes we need to test things sequentially, testing one and then testing another separately, just because otherwise it may ruin the user experience.
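One generic way large products keep hundreds of parallel experiments from colliding is to group potentially conflicting experiments into mutually exclusive layers, so a user sees at most one experiment per layer while experiments in different layers can overlap. That layering technique is a well-known industry pattern sketched here as an assumption; the conversation does not say this is exactly how Flow does it.

```python
import hashlib

# Hypothetical layers: experiments inside a layer touch the same surface and must not
# overlap for any single user; experiments in different layers are assumed independent.
layers = {
    "paywall":    ["paywall_copy_v2", "paywall_price_test"],
    "onboarding": ["onboarding_copy_v3"],
}

def bucket(user_id: str, salt: str, size: int = 1000) -> int:
    """Hash the user into one of `size` buckets, independently per layer (salt)."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % size

def experiments_for_user(user_id: str) -> list[str]:
    """At most one experiment per layer: each layer's bucket range is split evenly."""
    active = []
    for layer, experiments in layers.items():
        slot = bucket(user_id, salt=layer) * len(experiments) // 1000
        active.append(experiments[slot])
    return active

# A real system would also reserve part of each layer for users in no experiment at all.
print(experiments_for_user("u_123"))
```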
00:33:18
Speaker
We also try to, and again, this is more of a lesson learned, align between teams. And if someone knows that something may affect other teams, they usually align with those teams, because, I don't know, there might be some effect of cannibalization or there might be some, you know, redistribution of user attention.
00:33:42
Speaker
So again, it's more about how people come to understand what they can change and how they align with other teams on this.
00:33:57
Speaker
But yeah, sometimes we have some issues with that as well, but that's more of a rare case, I'd say. Brilliant. Well, it's been great to hear about how you've tackled this mammoth task, and clearly it's had a huge amount of impact for you guys at Flow. I suppose before I let you go, Dimitri, it'd be great to understand: what's a piece of advice you would give yourself if you were just starting this journey again?
Adopting Parallel Testing and Hypothesis Development
00:34:28
Speaker
Yeah, so I think the first is...
00:34:34
Speaker
I'd say that it's a long process, but don't worry, you'll get through it. But it's a process. It's not just a one-time thing that you do and then everything becomes very good.
00:34:48
Speaker
But generally, being serious, I would say that I have two pieces of advice for everyone who's thinking about experimentation, and for my past self from, I don't know, 2018.
00:35:06
Speaker
So use parallel testing to speed up your velocity; it actually will help you to speed everything up tenfold if done in the right way.
00:35:18
Speaker
And I think the second, it's not directly about experimentation as a tool, but more like general advice: constantly push for strong hypotheses.
00:35:34
Speaker
Because again, experimentation is just a tool to choose the best from all of them. But if you can ensure that the overall level is higher, your gains will be better as well. So yeah, push for better hypotheses as well.
00:35:51
Speaker
Sure, but I hope that's helpful. For me, definitely. It sounds it, from the journey that you've been on. It seems that that's been key to your success.
Flow's Future: AI and Experiment Analysis
00:36:04
Speaker
So finally, Dimitri, what's next for Flow? Looking ahead, what's on the roadmap for your data journey?
00:36:12
Speaker
A lot of things. Well, how can we not mention AI, for example. So yeah, I think we are thinking of adopting AI for decision-making automation.
00:36:28
Speaker
As I mentioned before, some experiments are currently completely managed by stakeholders, but why not delegate that to AI? Let's check.
00:36:39
Speaker
Maybe that's possible, and this would actually speed everything up a lot. What else? I think for quality, we are adopting an approach that's maybe not new for some companies, but definitely new for us: meta-analysis of experiments.
00:37:00
Speaker
It's when you take a corpus of experiments that were run over, for example, six months or something like this, and you try to get insights from it:
00:37:13
Speaker
I don't know, which domains are the best, which hypothesis types are the best, what we should focus on and what we should avoid doing. So, yeah, this is one of our big bets, definitely. And we are also trying to improve quality through improvements in statistical methodology.
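To make the meta-analysis idea above concrete, here is a minimal sketch that takes a corpus of past experiments and summarizes win rates by domain and by hypothesis type. The records and field names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical corpus of past experiments: (domain, hypothesis_type, outcome).
corpus = [
    ("paywall", "copy_change", "win"), ("paywall", "price_change", "loss"),
    ("onboarding", "copy_change", "win"), ("onboarding", "flow_change", "flat"),
    ("content", "new_insight", "win"), ("content", "new_insight", "flat"),
]

def win_rates(corpus, key_index: int) -> dict:
    """Share of winning experiments, grouped by the chosen record field."""
    counts = defaultdict(lambda: [0, 0])            # key -> [wins, total]
    for record in corpus:
        key, outcome = record[key_index], record[2]
        counts[key][1] += 1
        counts[key][0] += outcome == "win"
    return {key: wins / total for key, (wins, total) in counts.items()}

print("by domain:", win_rates(corpus, 0))
print("by hypothesis type:", win_rates(corpus, 1))
```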
00:37:38
Speaker
So it might actually be a good way to improve both velocity, as we will be able to stop experiments earlier, and at the same time it will give us better quality as well. Aligning on metrics is always an ongoing issue.
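On stopping experiments earlier without losing trust: that usually means some form of sequential testing. Purely as one illustration, and not necessarily the methodology Flow is adopting, here is a minimal Wald sequential probability ratio test for a conversion metric, comparing a baseline rate against a hoped-for improved rate.

```python
from math import log

def sprt_decision(successes: int, trials: int, p0: float, p1: float,
                  alpha: float = 0.05, beta: float = 0.2) -> str:
    """Wald's SPRT for Bernoulli outcomes: keep collecting data until the
    log-likelihood ratio crosses an acceptance or rejection boundary."""
    llr = (successes * log(p1 / p0)
           + (trials - successes) * log((1 - p1) / (1 - p0)))
    upper = log((1 - beta) / alpha)     # cross above: evidence for the improved rate p1
    lower = log(beta / (1 - alpha))     # cross below: evidence for the baseline rate p0
    if llr >= upper:
        return "stop: evidence for p1"
    if llr <= lower:
        return "stop: evidence for p0"
    return "continue collecting data"

# Hypothetical check after 5,000 users: 4.0% baseline versus a 4.6% target rate.
print(sprt_decision(successes=240, trials=5000, p0=0.040, p1=0.046))
```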
00:37:58
Speaker
And generally we are also switching to an experiment-as-code approach, which should ensure and improve our quality as well.
00:38:10
Speaker
So yeah, a lot of things. Basically that's, let's say, around a year's worth for us. The AI thing may take longer, let's see, but everything changes and it's ongoing; maybe in a year I will give you a completely different answer to that question.
00:38:31
Speaker
Oh, well, brilliant. I think AI is on everyone's roadmap, and finding the right use cases for how they can leverage it in the right way, I think, is going to be key.
Conclusion and Call for Stories
00:38:42
Speaker
So yeah, sounds like exciting times ahead. And thank you again for joining us on the pod. Hopefully you've had a good first experience. And yeah, thanks for sharing your insights as to how you guys have scaled experimentation at Flow.
00:38:56
Speaker
Thank you, Harry. Thanks for inviting me. Yeah, I hope it's going to be helpful for some teams that are just starting out. And yeah, for anyone listening to this, don't hesitate, you can ask me something directly.
00:39:13
Speaker
I can help as well. Brilliant. Well, look, we'll put a link to Dimitri's LinkedIn in the show notes. I'm sure if you're in experimentation as well, there are a lot of challenges that Flow are facing that I'm sure you could get involved in.
00:39:27
Speaker
Dimitri would be happy to answer any questions on experimentation or life at Flow, I'm sure. But thanks, everyone, for joining us. We'll see you again in a couple of weeks.
00:39:39
Speaker
Bye-bye. Bye-bye.
00:39:43
Speaker
Well, that's it for this week. Thank you so, so much for tuning in. I really hope you've learned something. I know I have. The Stacked podcast aims to share real journeys and lessons that empower you and the entire community. Together, we aim to unlock new perspectives and overcome challenges in the ever-evolving landscape of modern data.
00:40:04
Speaker
Today's episode was brought to you by Cognify, the recruitment partner for modern data teams. If you've enjoyed today's episode, hit that follow button to stay updated with our latest releases.
00:40:15
Speaker
More importantly, if you believe this episode could benefit someone you know, please share it with them. We're always on the lookout for new guests who have inspiring stories and valuable lessons to share with our community.
00:40:27
Speaker
If you or someone you know fits that bill, please don't hesitate to reach out. I've been Harry Gollop from Cognify, your host and guide on this data-driven journey. Until next time, over and out.