
How InVision accidentally increased retention by focusing on virality

Mike Fiorillo | Founder of Optimology and former Head of Optimization at InVision

  • Acquisition | Activation | Engagement | Growth | Metrics | Onboarding | Retention
  • May 2019
  • EP11

A lucky accident

How InVision optimized their way to success

In today’s episode, we have Mike Fiorillo, the Founder of Optimology and previously a Growth Product Manager and Head of Optimization at InVision. InVision is a digital product design platform powering the world’s best user experiences.

We chatted about Mike’s experience at InVision and his journey transitioning from conversion rate optimization to product growth, when they realized they were hitting diminishing returns on the CRO experiments being run. We discussed how they calculated the sample size and statistical significance required for tests, how they prioritized which experiments to run, and their process of building context for these experiments through quantitative and qualitative research.

We also discussed how testing mitigates opinions and how InVision accidentally increased retention by focusing on virality.

I hope you enjoy this episode!


Highlights

Transitioning from CRO to Product Growth (00:02:45)
How to calculate statistical significance and sample size for tests (00:07:45)
How to prioritise what experiments to run (00:09:40)
Balancing qualitative data with quantitative data when prioritising experiments (00:10:40)
How testing mitigates opinions (00:16:20)
How to define personas and their jobs to be done through quantitative and qualitative analysis (00:24:20)
When to start segmenting onboarding and activation (00:31:30)




About the podcast

My name is Andrew Michael, and I started CHURN.FM because I was tired of hearing stories about some magical silver bullet that solved churn for company X.

In the real world, tackling churn and increasing retention is one of the hardest problems a subscription business faces.

In this podcast, you will hear from founders and subscription economy pros who are taking a systematic approach to increase retention and engagement within their organizations.

Transcription

Andrew Michael
Hey, Mike, welcome to the show. How you doing today?

Mike Fiorillo
Hey, Andrew, doing great. Happy to be here.

Andrew Michael
It’s great to have you on the show today, Mike. I mean, we’ve been chatting for quite a while now, and I know you’ve gone on to a new journey yourself, starting to work as a growth consultant, which we’ll touch on a little bit in a bit. But I want to talk today about your experience when it came to your time at InVision. And for the listeners, maybe you just want to give us a little bit about what InVision is and what your role at InVision was?

Mike Fiorillo
Yeah, for sure. So InVision is a product design and collaboration platform. It’s a place where designers can create prototypes of the different web and mobile experiences they may be working on, and they can share those with their team and collaborate on them. I joined the company first to head up CRO on the marketing team and later moved into product and helped to put together our product growth team. We were focused on areas like activation, retention, virality, etc.

Andrew Michael
Ah, interesting. You said you started out in CRO. How did you make that transition from CRO to product management and growth? And what were some of your earlier responsibilities? How did that transition over time between the roles?

Mike Fiorillo
Yeah, so when I joined, InVision had recently launched their enterprise product. Prior to that, it was only a self-serve platform. And once they launched enterprise, the priority became to generate leads from the website and within the product. And there really were not any good touch points within those areas to capture leads. So we started to experiment with calls to action to get an enterprise trial, both on the website and in the product, and did a lot of iteration around figuring out where the best touch points were to capture leads, and what the best offers were, whether it was scheduling a demo or getting an enterprise trial. And after a while, we realized that we were hitting a point of diminishing returns. We were generating a lot of high-quality leads, but the focus really needed to be product usage and user growth. So we started to turn more towards activation and retention. And we found through some of the CRO experiments that we were getting really effective, high-impact results within the product. So we started looking at how we could drive virality through the product experience: getting users to share more prototypes, and when they shared them, getting the visitors to those prototypes to start collaborating and leaving comments. So there was just a natural kind of transition from some of the stuff we were doing on the marketing side into the product.

Andrew Michael
That’s very interesting. And you mentioned as well reaching a point of diminishing returns when it came to the experiments you were running on the CRO side. How did you get to that point where you sort of realized that whatever we start to do from here, we’re not getting the returns we used to? What was the inflection point when you said, okay, let’s switch focus?

Mike Fiorillo
Yeah, I mean, it really came as a result of running a lot of experiments and seeing that, when we first ran these tests, when we were finding winners, they’d be, you know, sometimes 20-30% improvements in essentially the signup rate on these landing pages. And after a while, we were finding it hard to find statistically significant improvements. So essentially, we realized that we were probably going to get a better return on investment by focusing on things in the product, to drive activation and to actually get more users using the product within these target accounts that we were going after for the enterprise product. Because if we could get more people using the product within one of the accounts that was then requesting an enterprise demo, it was much easier to sell the tool to a company that had, you know, 50 or 100 people already using it than one that was just kind of starting out with the tool. So we realized that we were actually going to be able to monetize better if we focused on product usage.

Andrew Michael
Very interesting. So it sounds a little bit like you had the use case where you have quite a few people joining InVision within an organization, in bigger companies, without really having an official company account, and then somehow somebody in the organization comes along and says, wait a second, we have these multiple accounts going on in InVision, we need to bring them together. Is that sort of the case that you’re talking about when it comes to enterprise? And is that how you sort of saw the natural progression into those conversations?

Mike Fiorillo
Yeah, so it could have happened in a couple of ways. One is, they may have seen on our marketing site a call to action to get a demo of InVision Enterprise, and they might have seen certain messaging about features that they’d be able to access with enterprise that they didn’t currently have in self-serve. Another way was, they would be hitting certain limits within the self-serve product, and they would only be able to use the features they were trying to access if they upgraded to enterprise. We found that they would typically convert better if it was the latter. So if they were trying to use something like custom workflows, which was only available in the enterprise product, we knew that they had a desire and motivation to use one of those enterprise-only features, so they would typically convert a lot better.

Andrew Michael
Cool. And you mentioned something as well earlier around statistical significance and the diminishing returns on the tests. I want to dive a little bit deeper into that before we move on to the work that you did on activation and onboarding. When it came to these tests, maybe you want to talk the listeners through what a typical test would look like and how you would go about calculating the statistical significance and sample size for the tests you were running.

Mike Fiorillo
Yeah, definitely. So essentially, the way I would always do it is I’d figure out, okay, how much traffic is currently visiting this page or this specific step in the flow that we’re going to be trying to run experiments on, and what’s the current conversion rate. So let’s say you have 1,000 people per month hitting this particular page, and the conversion rate is 10%. You can then plug that into a sample size calculator that tells you how much traffic you need to run the test, and for how long. And so we’d always do that to ensure that it made sense to run the experiment. The other piece that I didn’t mention is you need to also have an idea of what the expected improvement is going to be on that page. So if you expect you can have a big improvement, the sample size requirements can be a lot smaller, and you’re more likely to get to significance faster. Whereas if you think the improvement might only be 1%, or really low, it’s going to take quite a while to get to significant levels. So we’d always want to do that kind of sample size calculation and make sure that it made sense to run the experiment we were thinking of doing. And that would go into our prioritization exercises, where we’d figure out what tests to run next.
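For readers who want to try the calculation Mike describes, here is a minimal sketch of the standard two-proportion sample-size formula that calculators like the one he mentions implement. The 1,000 visitors and 10% conversion rate are the illustrative numbers from the conversation; the 20% relative lift is an assumed minimum detectable effect, not an InVision figure.

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-proportion z-test.

    baseline      -- current conversion rate (e.g. 0.10 for 10%)
    relative_lift -- smallest relative improvement worth detecting (e.g. 0.20)
    """
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return n

# Mike's example: 1,000 visitors per month converting at 10%.
n = sample_size_per_variant(0.10, 0.20)
months_needed = 2 * n / 1000   # two variants split the 1,000 visitors/month
```

Note how a smaller expected lift blows up the requirement, which is exactly the diminishing-returns problem Mike describes: detecting a 5% lift on the same page needs many times more traffic than detecting a 20% lift.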

Andrew Michael
And in prioritization as well, did you have any specific frameworks for categorizing the different experiments? Maybe you could take us through how you prioritized what to run and when?

Mike Fiorillo
Yeah, definitely. So we used a pretty standard framework: essentially, the expected impact of the experiment, the probability that it would succeed, and that probability was based on how much evidence we had that the test actually made sense to run. So we did a lot of things like user testing and looking at analytics data to get some insights around which tests might make the most sense to improve a specific part of the funnel. And then the third piece that we looked at was the effort involved to actually build the experiment. That would typically involve talking to the engineering team and getting a rough sizing of the work, and also thinking about the design effort involved to build something.
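The framework Mike describes is essentially an ICE-style score: impact, weighted by an evidence-based probability of success, divided by effort. A toy sketch with made-up backlog entries, purely to illustrate the ranking, not InVision’s actual scores:

```python
def priority_score(impact, probability, effort):
    """Expected impact, weighted by evidence-based probability of success,
    relative to build effort. Higher scores get run first."""
    return impact * probability / effort

# Hypothetical backlog: (name, impact 1-10, probability 0-1, effort in eng-days)
backlog = [
    ("Tooltip on comment toggle", 6, 0.7, 2),
    ("Require sign-up to comment", 9, 0.5, 8),
    ("Redesign share-link page", 8, 0.4, 20),
]

ranked = sorted(backlog, key=lambda e: priority_score(*e[1:]), reverse=True)
```

Cheap, well-evidenced experiments, like the comment-toggle tooltip Mike describes, naturally float to the top of a list like this.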

Andrew Michael
Very interesting. And in terms of the context that you had and the research that went in: you mentioned you did a bit of user research, and then also looked at analytics and feedback coming through. How much of a balance was there in how you prioritized experiments between user feedback and analytics? Did you use them interchangeably, or did one have more weight over the other?

Mike Fiorillo
Yeah, so the analytics data would really help us figure out where we wanted to test, identify where the problem areas were, and help us estimate the impact that we could have with a test. So we’d know how many people are visiting this part of the flow, and what’s the existing conversion rate from one step to the next. Analytics would really help us come up with that information. But then, to try to understand why something might be broken, we really had to go to the qualitative research. And that would typically give us more specific ideas about how we might go about fixing the issue.

Andrew Michael
And having the what and the why together really helps give you that full picture. Combining analytics and feedback really adds additional context and gives you more confidence as well when running tests. So let’s fast forward a little bit now. You moved on from CRO once you realized there were diminishing returns, and you wanted to move into product. And you said you focused on virality to begin with, in terms of share links. Do you want to talk us through a little bit of that process and what you discovered?

Mike Fiorillo
For sure. So we focused on share links because InVision is a collaboration tool. Essentially the way it works is that a designer would sign up for an InVision account, create a prototype, build it, make it functional, and then share it for feedback within the company. And so we typically saw a lot more traffic visiting share links than we would see in the core product, because one designer might share it with 10 or 20 people. So we realized we were getting all this traffic from share links, but we were not necessarily converting those anonymous visitors to share links into actual registered product users. So we started running some experiments on share links themselves. And the way that we could convert them was to get those people to collaborate on the prototype, typically through commenting. We learned pretty quickly that a lot of users did not even realize that they could leave comments on prototypes. In some ways, that was a byproduct of how prototyping in InVision works: it’s really meant to be something that looks like a real product, so you don’t necessarily want a lot of InVision’s UI on top of the prototype. You want it to look and feel real. But we also realized that there was a bit of a problem in that people just did not know they could leave comments. So one very quick experiment that we ran was to add a tooltip on top of the comment toggle switch, just to make people aware that they could start collaborating by enabling comment mode. And from that alone, there was a 15% improvement in collaboration behavior on prototypes, just through that one simple experiment. And so we realized that there was a lot of low-hanging fruit we could tackle related to collaboration. And so we did some other stuff. As an example, users were not required to create accounts to leave comments.
And because of that, we were having a hard time retaining them and making them part of the organization’s InVision account. So we thought, well, why don’t we try requiring everybody to create an account in order to post a comment. And there was a little bit of resistance to this, because the general conception was that if we did that, we would hurt the engagement rate: people would not want to leave comments if they had to sign up for an account. But we said, okay, why don’t we just test it and see what happens. And we ran the test, and we saw that it actually had very little impact on engagement. There was a slight drop, but it was only maybe three or 4% on the comment rate. But we had a massive improvement in the registration rate through share links; that was up over 100%. So we realized, okay, the benefit of requiring people to sign up to comment definitely outweighs the cost, and we ended up implementing that. And the interesting thing is, even though we weren’t really setting out to improve retention per se, we did want to measure whether these were quality users coming in through commenting on share links. And when we compared the cohort of users who were required to sign up to post a comment versus the ones that were not, we actually saw we were improving retention: they were significantly more retained over time when they were actually creating accounts.
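A quick way to sanity-check results like these is a two-proportion z-test. The sketch below uses invented counts chosen only to mirror the shape of the outcome Mike describes (a small, non-significant dip in comment rate alongside a large, significant jump in registrations); they are not InVision’s real numbers.

```python
from statistics import NormalDist

def two_proportion_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts, for illustration only:
# registrations more than doubled -- clearly significant
p_registration = two_proportion_p_value(200, 10_000, 420, 10_000)
# comment rate dipped slightly -- not significant at this sample size
p_comment = two_proportion_p_value(3_000, 10_000, 2_900, 10_000)
```

The asymmetry is the whole argument: a change can leave one metric statistically unmoved while shifting another far beyond any reasonable noise threshold, which is what justified shipping the sign-up requirement.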

Andrew Michael
Well, that’s interesting on a number of different levels. I think the first thing is, it almost sounds like you tended to agree with the skepticism about hurting engagement. But I think you mentioned one crucial thing that’s really important: let’s just test it and see what happens. And this is often easily said but not easily done. How much of a testing culture did you have at InVision, and what was the appetite for risk like when it came to the experiments you were running?

Mike Fiorillo
So the company was quite open to testing. However, because it’s a design platform and the audience is designers, we really had to make sure that there was a high quality bar for the amount of design polish in the experiments we were running. And so as long as we could show that we were building experiments that were well designed, weren’t hurting the user experience, and in general were improving the user experience, the company was pretty open to running a lot of tests in different areas.

Andrew Michael
Yeah, I can definitely see that as well. You have a panel of judges judging your every design. And they’re all designers.

Mike Fiorillo
Yeah. And they’re all pretty active on Twitter.

Andrew Michael
I can imagine. You mentioned as well that you saw an increase in retention when it came to these specific users. Was that sort of by design, or was it something that you realized just going through this initiative of trying to increase virality?

Mike Fiorillo
Yeah, it was really something that we didn’t even really consider at first; we were trying to improve virality. And in order to justify that the experiment made sense to implement for all users, we wanted to make sure we were showing that we were increasing the quality of the users we were getting. So looking at retention was really a way to look at the quality of the users we were getting and whether they were coming back to the product. It’s one thing to just get people to sign up and leave a comment and never come back. It’s another thing to actually get them to start collaborating more actively over time. And so yeah, we were just looking at making sure we were doing the latter. And it turned out that we were getting people to come back more often over time.

Andrew Michael
Yeah. And it seems a little bit obvious as well, but did you see any correlation between the number of active users in an account and the overall account’s retention over time? And was there maybe a point of diminishing returns in that as well, where after a certain point it didn’t really matter how many people joined the account?

Mike Fiorillo
Yeah, definitely, there was a correlation between the number of active users in an account and the retention of that account. And that’s part of the reason that on the enterprise side, the retention was extremely high. Revenue retention was actually over 100% on the enterprise side, due to expansion. And so, 100%, we knew that if we could get more people within a company using InVision, they were more likely to convert.

Our assumption was always the more people we could get in the account, the better. So we were never satisfied when we hit a certain number of active users within an account; we were always trying to generate more active users within that company. Anybody who might have been involved in the design process, which could be hundreds of people within a company if you include everybody that’s looking at prototypes and all the stakeholders, we wanted them all to be using the product.

Andrew Michael
So InVision is a product for the whole company. It’s interesting as well that you were trying to onboard new users all the time within an organization, doing things to try and encourage share links and people signing up. Did you treat anyone in an organization differently when it came to the use of the product? Because obviously they’ll have different use cases. If you have a stakeholder who’s just viewing a prototype, their use case is obviously going to be very different to the designer who designed the prototype. So how did you treat different users when it came to your onboarding and activation efforts?

Mike Fiorillo
Yeah, that’s a really interesting question, because that became a big priority over time on our growth team, as InVision started releasing features that were not necessarily geared only towards designers, which originally was the case. An example of that was Inspect, a tool to allow developers to inspect prototypes and get things like CSS code and values that they could use as they were developing the actual designs that the designers were handing them in the prototypes. And so after releasing that product, we started seeing a really high percentage of developers signing up for InVision. And then we also saw a lot of product managers over time starting to sign up, wanting to create their own prototypes. And InVision also released a tool called Freehand, which was a way to create lower-fidelity wireframes and sketches and things along those lines. And that was starting to attract a wider audience. So it became clear over time that we needed to improve the onboarding so that it worked not just for designers, but for different personas as well. And that really centered around a research project we did to understand what the jobs to be done were for these different personas, and how they were going to get value out of the product. We essentially rebuilt the entire onboarding flow to, one, understand who the person was that was signing up, whether they were a designer, developer, product manager, etc. And in addition to that, we wanted to understand what their goal was for signing up for the product. It could be that they were creating a prototype from scratch and did not have a design yet. It could be that they already had screens designed and just wanted to turn those into an interactive prototype. Or it could be that they were just brainstorming new ideas. So we would ask them what their goal was. And then based on that answer, and based on who they were, they would get a completely different onboarding flow.
And so just as a quick example: for somebody who was creating a prototype from scratch and didn’t have any designs yet, it didn’t make sense to drop them into the web product to build a prototype, because that assumes they already had the screens designed. What they had to do was design their screens in a tool like InVision Studio, which was a screen design tool that we released, or Sketch, using our Craft plugin. And so the next step they would get in the onboarding was to download one of those tools. And we’d make it very clear to them that they had to first design their screens on their desktop, and then sync them to InVision through one of those two tools they downloaded. So we found that by tailoring the onboarding based on the persona and the job to be done, we had a much higher activation rate than previously, where everybody was getting the same onboarding flow.
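The branching Mike describes boils down to a lookup from (persona, goal) to an onboarding flow, with a generic fallback. A hypothetical sketch; the persona names, goal keys, and step names below are all invented for illustration:

```python
# Map (persona, goal) pairs to an ordered list of onboarding steps.
ONBOARDING_FLOWS = {
    ("designer", "prototype_from_scratch"):
        ["download_studio_or_craft", "sync_screens", "build_prototype"],
    ("designer", "prototype_from_existing_designs"):
        ["upload_screens", "build_prototype", "share_for_feedback"],
    ("developer", "inspect_designs"):
        ["open_shared_prototype", "use_inspect"],
}
DEFAULT_FLOW = ["generic_product_tour"]

def onboarding_steps(persona, goal):
    """Return the tailored flow if one exists, else the generic fallback."""
    return ONBOARDING_FLOWS.get((persona, goal), DEFAULT_FLOW)
```

A designer starting from scratch is first sent to download a desktop design tool, matching the flow Mike describes, while an unrecognized combination falls back to the generic tour.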

Andrew Michael
Yeah, I think as well, when it comes to personas and jobs to be done specifically, it tends to be easier said than done. It’s one of those things that requires quite a bit of effort to actually get right, because you end up trying to figure out how to bucket users into different personas based on the jobs they’re actually trying to achieve, and you can almost go down an endless path when it comes to the different use cases. So what was the process you took to understand what these jobs to be done were when it came to adding them to your onboarding flow?

Mike Fiorillo
Yeah, so there were essentially two parts to that. Our data team did a bunch of work to understand what the right activation metrics were for different personas. In the past, we had one main activation metric, which was creating a prototype and sharing it. But that obviously is not the right metric for somebody who might be getting invited to view a prototype and may just want to comment on it. So the data team looked at various personas. We weren’t asking people what their role was, but using tools like Clearbit, we were able to infer the role for, not all, but probably about half of our users. And so the data team essentially came up with a few different activation metrics and said, okay, if somebody is a developer, they’re more likely to be retained if they use Inspect and access CSS code on a prototype, versus a designer who would be creating something and sharing it, versus a stakeholder or an executive who may just be leaving a comment on a prototype. So that was one piece of the puzzle. The other piece was doing qualitative research. We had a UX researcher who did a bunch of customer interviews to understand what a user’s job to be done was right after they signed up. So we would target people who had just signed up for the product, interview them, and really understand, ask them things like: what was your main goal for signing up? What were you looking to do? How did this need arise? How did you hear about us? What was your first experience like? Things along those lines.

Andrew Michael
Yeah, you mentioned as well using the service Clearbit. For the listeners, Clearbit is a service, an API, that allows you to enrich your data and get more insights on your users. They have quite a lot of information around job titles, company, company size, and location that you can enrich your data with automatically. I find it really interesting as well that you took this approach of not actually asking the users, because this is something that has come up in a previous interview: typically users don’t do a good job of selecting a role or persona, because it tends to be quite difficult to categorize. If you’re working in product, but you’re also on the growth team, you know, product growth, it gets a little bit complicated. And that’s just one example of where you see yourself sitting. So how did you actually then go about this? You pulled in all the data through Clearbit, and you had around 50% of your users with enriched data. What was the process of classifying users into the different roles? Because you needed to do that on your end, even though you were pulling the roles from public sources. How did you classify them?

Mike Fiorillo
Yeah, so our data team had scripts, because the Clearbit roles were quite specific; it could be like UX developer, UX designer, or front-end developer, things like that. So they had scripts that essentially said, okay, if it’s one of these 10 titles, let’s just classify them as a developer, and so on and so forth for the different key personas. So yeah, that was essentially left to them, thankfully, to figure that problem out.
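The title-to-persona collapsing Mike describes can be sketched as a simple lookup. The title sets below are hypothetical examples, not the actual mapping InVision’s data team used:

```python
# Collapse fine-grained enriched job titles into a handful of personas.
# These title lists are illustrative, not InVision's real mapping.
PERSONA_TITLES = {
    "designer": {"ux designer", "product designer", "visual designer"},
    "developer": {"ux developer", "front end developer", "software engineer"},
    "product": {"product manager", "product owner"},
}

def classify_role(title):
    """Map an enriched job title to a persona; unknown titles fall through."""
    if not title:
        return "unknown"   # enrichment only covered about half of users
    normalized = title.strip().lower()
    for persona, titles in PERSONA_TITLES.items():
        if normalized in titles:
            return persona
    return "other"
```

Keeping the mapping as data rather than branching logic makes it easy for a data team to extend as new titles show up in the enrichment results.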

Andrew Michael
But it’s definitely a very interesting approach. And then next up, you mentioned you had your UX researcher working and doing customer interviews on the jobs-to-be-done side. From there, you took the specific personas that you had highlighted, you had your jobs to be done, and that formed the basis of the different onboarding flows. But how did you then prioritize which flow and setup you wanted to optimize first? Because obviously, as you mentioned, there are quite a few different use cases and flows. What was the process there?

Mike Fiorillo
Yeah. So essentially, in terms of the persona-specific onboarding, that was looking at what the top roles were that were signing up. That was designers by far, with developers second. And the other piece to that was figuring out, okay, what goal did people have when they were signing up? There were essentially three or four main goals. It could be creating a prototype, it could be brainstorming, it could be people wanting to collaborate on a design. So we essentially bucketed those into three or four main goals. And then for a couple of those, there were sub-goals. By far, prototyping was the number one goal people were signing up to InVision to accomplish. And so we only had sub-goals for that, and the sub-goals were sending it out for user testing, so creating a prototype to then run usability tests on, or creating a prototype to send to their team for feedback, or creating one to present live in a presentation. And based on those sub-goals, we were able to ensure that when they created the prototype, we were informing them and educating them about the right features to use to accomplish their sub-goal. The reason we didn’t do sub-goals for some of the other goals was that they were just less common. So if somebody wanted to brainstorm ideas, we had a couple of tools available for that, Freehand and Boards, and the next step was really just to give them the option to use one of those two tools. There wasn’t much additional value in figuring out what they wanted to do with their brainstorming. Eventually, it might make sense to personalize even deeper, but for us, we started with the main personas and the main use cases.

Andrew Michael
And moving on from there. You mentioned that you looked at activation metrics and sought to understand that, between the different personas and jobs to be done, there would obviously be different activation metrics you’d be looking at. And I think you took the typical approach as well in the beginning, which was to just pick one key activation metric that tends to work well across the broad audience, and run with it. But at what stage of the company, its growth, and the size of the team did you actually decide to make that switch away from that single metric of getting people to set up a prototype? What was the moment where you realized, okay, we actually now need to get a little bit more granular, because people are using us for different reasons and the definition of activation is different across users?

Mike Fiorillo
Yeah, so the main catalyst for that was just looking at the different roles of people signing up and realizing that, as InVision was launching new features that moved beyond just targeting designers, there were a lot of non-designers signing up for the product, and also seeing that the activation rate and the retention rate for some of those non-designer personas were significantly lower. We were activating and retaining designers quite well, because the product had been optimized for that audience from the beginning. But it was really clear that there was an opportunity to do a much better job of providing a better experience to the other types of users that were signing up.

Andrew Michael
Yeah. And I think as well, this is a little bit of a challenge in the beginning, normally for smaller companies: having one metric for activation to follow and work with allows the team to get a little bit more focused. What I’m asking about is when you decided, okay, we realize this now, we need to give a little bit more focus to the different types of cases. What size was the growth team you were working in? Had you expanded in terms of headcount as well, in a way that allowed you to experiment more and execute on these different metrics?

Mike Fiorillo
So the team had grown a little bit. We had two product managers, about five engineers, a dedicated designer, and a data analyst working with us, and we were all working under the VP of Growth. So we did have the ability to have part of the team focused on things like virality or building our growth foundation — building the growth stack — and another part of the team focused on these onboarding experiments. It did become easier to tackle multiple problems at once over time, and that was just the evolution of how it happened: realizing that there was this need to really improve the user onboarding.

Andrew Michael
Yeah. Cool. So we talked a little bit about activation and onboarding, and a little bit around acquisition. Was there anything you did when it came to looking at retention and churn overall — anything you tried that didn't work?

Mike Fiorillo
Yeah, so one good example of something that didn't work was a cancellation flow we tried to build within the product, where we asked people what their reason was for canceling their account. Based on the reason they gave us, we would then have an additional step that tried to provide some messaging to counter the objections they'd given, to see if they might want to keep their subscription. This was an idea that came from looking at the cancellation flows of other best-in-class SaaS companies like Dropbox and a few others, and we felt we could make somewhat of a dent in the current cancellation rate by doing this. It turns out we had pretty much no impact on the cancellation rate. And I think it goes to show that if you're trying to fix retention and churn, you really need to do it earlier in the user's lifecycle and make sure they're getting good value right from the beginning. Because if you wait until the very end, when they've already decided they want to cancel, there's really not a lot you can do to influence that decision.

That said, I think it could work for certain companies. Dropbox, as an example, has that kind of flow, and they're dealing with a massive volume of accounts, so if they could get a 1% drop in the cancellation rate, that could be millions of dollars in revenue for them. So it could still make sense at a certain scale. But if you're not at a massive scale, it's really hard to even show a statistically significant improvement by running that kind of experiment. It's probably better to use your resources to focus on something earlier, like optimizing adoption or trying to increase engagement of already active users.
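To put a rough number on Mike's point about statistical significance at smaller scale, here's a back-of-the-envelope sample-size sketch (an illustration, not from the episode — the baseline 10% cancellation rate and one-percentage-point target are hypothetical). It uses the standard normal-approximation formula for comparing two proportions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-variant sample size for a two-sided z-test
    comparing two proportions (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided test
    z_beta = z.inv_cdf(power)            # value for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical numbers: 10% baseline cancellation rate, hoping to
# detect a one-percentage-point absolute drop (10% -> 9%).
n = sample_size_two_proportions(0.10, 0.09)
print(n)  # roughly 13,500 canceling-eligible accounts per variant
```

Under these assumptions you'd need on the order of 27,000 accounts across the two arms just to reliably detect the effect — which is why, as Mike says, this kind of cancellation-flow experiment mostly makes sense at Dropbox-like volume.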

Andrew Michael
Yeah, absolutely. And I think it comes down to taking the user's psychology into account as well. When a user joins your product, they're really excited, they want to get started, they have a problem, and they feel your solution is there to solve it. Whereas at the end, when someone is consciously coming into your product and trying to cancel, you can already see you haven't been delivering value to them — it's not solving their problem, and they're actively going through the effort of canceling the account. So focusing your efforts when users are most excited and ambitious about your product is definitely a much better use of time. Just talking a little bit about your own journey, Mike — obviously you've moved on from Invision, and as mentioned at the beginning, you've gone into a bit of growth consulting. Maybe you want to give us a brief overview of what you're doing at the moment, your services, and anything interesting and relevant to the listeners?

Mike Fiorillo
Yeah, for sure. So I started a company called Optimology after leaving Invision. The idea came from finding, while working on growth, that the best ideas would often come out of user research — really spending a lot of time understanding where the areas of opportunity were by looking at analytics and doing user testing and surveys. Unfortunately, it can be hard to find the time to do that when you're working as a product manager; you may have access to a user researcher, but they're probably not going to be dedicated to growth. So the idea with Optimology is to provide that research to companies: identify the areas of impact and essentially come up with a backlog of specific growth experiments they could run, which I help them complete. I'm basically offering two things. One is a conversion audit, where I'll spend about three to four weeks doing a bunch of research to come up with growth experiments. The other service is general growth consulting — helping build a growth practice within the company, hiring, and setting up growth foundations, tools, and things along those lines. It's been a lot of fun working with a lot of different types of clients and learning what works for specific kinds of clients that might not work for others. So yeah, it's been a great experience.

Andrew Michael
What's been the biggest difference, the biggest shift, moving from working on an in-house team at Invision to now working as an outside consultant?

Mike Fiorillo
Well, I'd say the biggest shift — and this is true of consulting in general — is that you can work and help on strategy, but you're not necessarily that heavily involved in the execution. In growth, it's fun to be able to come up with ideas, run them, and see what works and what doesn't. When you're consulting, you're not necessarily going to be involved in actually seeing which ideas end up making it into the product. So I guess that's probably the biggest difference: you're just not as engaged on the execution side.

Andrew Michael
Yeah, I can see that. You'd need to go and sign up for the different products you've worked with so you can then see the changes happening.

Mike Fiorillo
Exactly. Exactly.

Andrew Michael
Cool. Well, Mike, it's been a pleasure having you on the show today. I really appreciate the time, and there were some great insights for the listeners. So thanks very much for joining, and I wish you the best of luck going forward on your journey.

Mike Fiorillo
Okay, thanks Andrew and keep up the great work with the podcast.

Andrew Michael
Thanks.

Mike Fiorillo
Have a good one.