…Hello, everyone, and welcome back to the Canopy, where we talk GEO, founder life, and growing your visibility in AI search. I'm Jen Shannon, your host and founder of Clemelopy. Today's episode is a big one. I'm guessing you can already tell by the title, but in this episode, I'm unpacking my first month results from Clemelopy's case study on Clemelopy itself. That's right. I set out on a journey to show you how Clemelopy can optimize itself…using itself. And if you're just joining me on this ride, you can take a look at Clemelopy's baseline study video over on our YouTube channel, which I will link in the show notes. I didn't release it as a podcast because at the time, I didn't have a podcast. It didn't exist. And, well, now I'm looking to move forward, not backward. So we're just gonna start from here on the podcast. So, yes, I am documenting my own GEO journey in real time, one month at a time, so you can watch the strategy unfold and see the actual numbers as they move. No polishing it up after the fact. No waiting until it's a success story. You are along for the ride from the very beginning. But before I dive into the results, let me give you a little recap on our case studies. I'll start with the why. Optimizing for AI search, also known as generative engine optimization, is a very new field. And when I say new, I mean we are all figuring this out in real time. There are no ten year case studies to pull from. There are no industry benchmarks that have stood the test of time. Nothing. Not only did I want to create proof of concept for Clemelopy itself, but I wanted to be able to show the entire process almost like a makeover so that at the end, we can point to proof of concept and proof points, pinpoint what worked, improve our tools and processes over time, and show how GEO is a long game without overnight results.
In addition to that, when I created Clemelopy's website, I intentionally did not optimize it for AI search so that you could watch and follow along side by side with me. Now I know what you're thinking. Yeah. Yeah. Clemelopy will have its own case study, and, of course, it's gonna be great and all that stuff. But we have several case study participants we are also following. You've likely already seen or listened to Dana's baseline. Dana is a photographer located here in Jacksonville, Florida. Dana Goodson Photography in case you wanna look her up. You're also going to meet someone named Jill Stonier very soon, and then Susie with Made with Kindness Cleaning. I'm also beginning a case study on my other business, Jen Shannon Creative Atelier, but that case study is for a different purpose and with a different strategy that I'll go into at another time. So these are real businesses with real websites, and we are doing this together to see what results we can garner from it. All of that being said, let me tell you a little bit about what we are measuring. First, we are measuring referral traffic from ChatGPT, Claude, Perplexity, Grok, and Gemini. As of now, there is no way to truly measure how much your business is being recommended within these AI searches. And the reason is because these AI search companies are their own businesses, and they own their own data and their own tools. And allowing an outside platform like Google Analytics to tap into that data and make it publicly visible would be a massive decision and one that none of them have made yet. That would be like a bakery with the best sourdough bread letting anyone walk in, take the recipe, and use it to make their own bread or make it for everybody else. So the only way to measure currently is to measure the traffic coming to your website from their platforms. So that's what we are tracking for this first metric.
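To make the referral-traffic idea concrete, here's a minimal sketch of how sessions can be bucketed by their referrer. This is a hypothetical illustration, not Clemelopy's actual tooling; the domain list is an assumption about which hostnames these AI engines send traffic from, and real analytics setups vary.

```python
# Hypothetical sketch: bucketing website sessions by AI referrer.
# The domain-to-engine mapping below is illustrative, not exhaustive.
from urllib.parse import urlparse

AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "perplexity.ai": "Perplexity",
    "grok.com": "Grok",
    "gemini.google.com": "Gemini",
}

def classify_referrer(referrer_url: str) -> str:
    """Return the AI engine a session came from, or 'Other'."""
    host = urlparse(referrer_url).netloc.lower()
    host = host.removeprefix("www.")  # treat www.perplexity.ai as perplexity.ai
    return AI_REFERRERS.get(host, "Other")
```

In practice, tools like Google Analytics already expose referrer source breakdowns; a sketch like this just shows what "AI referral traffic" means mechanically: it counts clicks that arrive from those platforms, not how often you were mentioned inside them.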
The next metric we are measuring is what we call share of model. This is where we define our business, our main competitors, and a list of questions or statements that we believe someone could be using to search in these AI engines and where we would like our business to appear and be recommended. We then go to these AI sites and prompt them with the same questions or statements across all of them and track their responses, including who they are mentioning and recommending, how often, and in what position they show up. We also look at things like whether the AI is citing from your site, whether the mention is a recommendation, and how you stack up against competitors. The higher your share of model, the more AI is treating your brand as a trusted source. The last measurement is the Orchard audit, which can be run on either a single page on your website or on your website as a whole. We score these sites and pages based on six categories of criteria, and the results clearly pinpoint what is wrong and where those issues need to be fixed. Now this last one isn't a metric, but it is a tool that we use for analyzing data. We use our Orchard Builder, which analyzes your page and makes optimization recommendations based on our Orchard ecosystem framework, which is our proprietary framework. We take those optimization recommendations, implement what makes sense to implement, and then measure our progress through analytics, share of model, and Orchard audits. Now you probably caught in the last sentence or two that we take those optimization recommendations and implement what makes sense. And I wanna give you an example of a recommendation that the Orchard Builder may give that wouldn't make sense to implement. Because Clemelopy is such a new business and site, we don't have a lot of reviews. One recommendation was to add more social proof in the form of case studies, reviews, testimonials, and so on. Well, we don't really have much for that, so we implemented what we could.
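The share-of-model tally described above can be sketched in a few lines. This is a simplified, hypothetical model of the bookkeeping, assuming you record, for each tracked prompt and engine, which brands the response mentioned; the field names and example brands are placeholders, not real data.

```python
# Hypothetical sketch of a "share of model" tally: for a fixed query set,
# record which brands each AI response mentioned, then compute the share
# of responses that mention yours. Field names and data are illustrative.

def share_of_model(responses, brand):
    """Fraction of tracked AI responses (0.0 to 1.0) that mention `brand`."""
    if not responses:
        return 0.0
    mentions = sum(1 for r in responses if brand in r["brands_mentioned"])
    return mentions / len(responses)

# Example: the same prompt tracked across three engines,
# with our brand mentioned in one response.
tracked = [
    {"engine": "Perplexity", "prompt": "best GEO consultant",
     "brands_mentioned": ["Clemelopy", "CompetitorA"]},
    {"engine": "ChatGPT", "prompt": "best GEO consultant",
     "brands_mentioned": ["CompetitorA"]},
    {"engine": "Gemini", "prompt": "best GEO consultant",
     "brands_mentioned": ["CompetitorB"]},
]
print(share_of_model(tracked, "Clemelopy"))  # mentioned in 1 of 3 responses
```

A fuller version would also weight position in the answer and whether the mention was a citation or an outright recommendation, which is what the transcript describes tracking by hand.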
But beyond that, we just have to keep working to grow our user base to gain those reviews and continue allowing time for our case studies to show proof. Each month, we check in, run the numbers, and show you exactly where things stand, what's improving, what's not, and what we're doing about it. And we document all of it here on the podcast and over on our YouTube channel if you wanna watch along. The goal isn't to show you a perfect success story at the end of the twelve months. The goal is to show the work as it happens so you can apply the same thinking to your own business and, again, show proof of concept. But really quickly, before we get into the results for month one, I want to let you know that this podcast is brought to you by Clemelopy. If you've been wondering why your leads have slowed down even though your work hasn't changed, I want you to know there's actually a reason for that, and it's probably not what you think. I offer a website visibility audit and strategy consultation where I go through your site, figure out what's getting in the way of AI search engines understanding what you do, and then hand you a clear, prioritized plan to fix it, including a one hour strategy call to show you the clear path forward and allow you to ask questions. It starts with a free fifteen minute call, and it's not a sales call. Don't worry. Just a conversation to see if this consultation makes sense for you and where you are in your business. This is a great option if, as a business owner, you don't have the time, energy, brain space, or desire to figure this out using our tools. You can book your free fifteen minute call at Clemelopy.com/workwithus. Alright. So let me set the stage. The first thing I did to start the case study was gather our stats. It's actually quite simple. We had no share of model, no AI referral traffic, and no citations. Our full site audit wasn't terrible, but it was an eighty two out of a hundred.
I always recommend starting with the home page when it comes to optimization… and there really is no scientific reason for it other than it is my personal opinion that the home page is like your business's storefront but on the Internet. And to me, that's the most important thing for getting people to want to come inside. I ran an audit on our homepage, and it was pretty terrible. It was seventy one out of a hundred. I then ran our Orchard Builder and used the results to implement its optimization recommendations and then waited to see at month one how those changes seemed to have affected our measurements. And here is where we get into the nitty gritty of what I actually went in and did. The first thing I did was add pillar pages for Clemelopy's core canonical pillars. I know that sounds like some big technical jargon, but basically these are the main topics Clemelopy represents expertise on. These give the site a topical architecture that AI can actually follow. So for example, Clemelopy's core canonical pillars are GEO, content strategy, search evolution, small business growth, brand authority, and AI visibility. So every piece of content that we create on our website needs to be able to roll up to one of these pillars. They not only anchor who we are and what we do, but they also show what we can provide expertise on. So I created a single page for each of these pillars on my site, created them so that the URL would be exactly those terms, and then created the content for each of those pages. I'll circle back in just a minute as to why creating these pillar pages was important if what we were actually supposed to be focusing on was the home page. Now the next thing I did was add schema markup across all pages. Schema was one of the top flags in the baseline audit and had come back as a failure, which means AI had no structured data to grab on to.
Schema is basically a way of labeling your content so that AI and search engines can understand what something is, not just what it says. So I went in and fixed that across the board, including on the home page. I also created a global FAQ page because this was also flagged in the baseline. FAQ pages matter a whole lot for GEO because AI is literally answering questions all day every day, and having a page that maps to that format gives it something to pull from. So here's where I circle back as to why the pillar pages were important for optimizing the home page. One of the major parts of optimization is what we call contextual links. These aren't your footer links. They're not your navigation links. These are the internal and external links you place within your actual page content, internal links being the links between pages on your site and external links pointing to a separate domain that is not your own. So I added contextual internal links throughout the copy, pointed those to the new pillar pages, and that was it for the internal linking. I also added external links through a data carousel that links out to supporting studies, because citing sources signals credibility and authority to AI systems. As I mentioned earlier, testimonials were flagged as a gap as well. I don't have a lot of them, and I'm in my first two months of business, so it is a real constraint. I'm not going to manufacture results to check a box, and I think naming that is exactly what makes a case study worth following. You can see the real gaps alongside the real wins. This is also a good spot to mention that GEO isn't a one and done thing. As AI systems change and we begin to collect data through these case studies, we'll be able to pinpoint and pivot in our strategies, so checking a box doesn't mean it stays checked. Alright. Are you still with me? Did I lose you? I hope not. But let's continue.
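For listeners who want to see what "schema markup" actually looks like on a page, here's a minimal sketch that builds FAQPage structured data as JSON-LD. The schema.org types (`FAQPage`, `Question`, `Answer`) are real; the question and answer text are placeholders, not Clemelopy's actual markup.

```python
# Sketch: FAQPage structured data (JSON-LD) that AI and search engines
# can parse. Types come from schema.org; the content is a placeholder.
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What is generative engine optimization (GEO)?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "GEO is the practice of optimizing a website so AI "
                        "search engines can understand and recommend it.",
            },
        }
    ],
}

# The serialized JSON goes inside a <script type="application/ld+json">
# tag in the page's HTML so crawlers can read it as structured data.
print(json.dumps(faq_schema, indent=2))
```

This is exactly the "labeling your content so machines know what something is, not just what it says" idea: the page's questions and answers become machine-readable fields instead of free-form text.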
So I ran the reports for share of model, the full site Orchard audit, and the home page Orchard audit. This is a true apples to apples comparison, and here's the breakdown. When we first launched the case study on February ninth, the home page was sitting at a grade C, seventy one out of a hundred. There were five things that failed, three warnings, and thirteen passed… And now at our one month check-in, the home page is at a grade A, eighty seven out of a hundred. There was only one failure remaining, four warnings, and sixteen passes. So our grade went up sixteen points. Our failures moved down from five to one. We gained one warning, and we gained three additional passes. Now let me walk you through what actually moved in the audit. I mentioned earlier that we use six categories to score an audit. Those categories are clarity, authority, structure, AI readability, media, and accessibility. Our authority went from twenty percent all the way up to eighty seven percent. That is a big jump, and it was driven by three things: adding schema, building out the pillar pages, and bringing in external citations and then linking to those citation sources. Those three things together told AI that this site has depth, it has structure, and it's connected to a broader body of knowledge. And I wanna remind you that these numbers are specific to the homepage itself. So, again, this is not the full site audit. This is just the homepage audit. So the authority went up from twenty percent all the way to eighty seven percent. I am so excited about that number. Next, the structure went from sixty five percent up to ninety percent. Clarity held at a hundred percent, which was great. The one failure that's still remaining is a form field that is missing a label, which is actually an accessibility fix, not a GEO fix. So it's a quick thing to address. I just may have dropped the ball on remembering to do that. Oops. The remaining flags are things like limited proof of expertise and add more specific facts.
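To give a feel for how category movement pulls an overall grade up, here's an illustrative sketch. The real Orchard audit weighting is proprietary and not described in this episode, so this simply averages the six categories equally; only clarity (100%), authority (20% to 87%), and structure (65% to 90%) are from the transcript, and the other three category values are made-up placeholders.

```python
# Illustrative only: the real Orchard scoring is proprietary. This equal-
# weight average just shows how one category jumping (authority 20 -> 87)
# moves an overall score. Media/AI-readability/accessibility values below
# are placeholders, NOT real audit numbers.

CATEGORIES = ["clarity", "authority", "structure",
              "ai_readability", "media", "accessibility"]

def overall_score(scores: dict) -> float:
    """Equal-weight average of the six category percentages."""
    return sum(scores[c] for c in CATEGORIES) / len(CATEGORIES)

baseline = {"clarity": 100, "authority": 20, "structure": 65,
            "ai_readability": 70, "media": 70, "accessibility": 70}
improved = dict(baseline, authority=87, structure=90)

print(overall_score(baseline), "->", overall_score(improved))
```

Even with every other category frozen, the authority and structure gains alone lift this toy average by about fifteen points, which mirrors the sixteen-point jump reported for the home page.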
Those are honest reflections of where the site still needs to grow, and you simply can't manufacture case study results or testimonials in one month. The tension is real, it's relatable, and it's part of the story. So those are the homepage results. Next up are the full site audit results. The baseline full site audit scored eighty two out of a hundred, which was a grade A. Not too shabby, but it had thirty two failures, a hundred and eleven warnings, and three hundred and five checks passed. For our one month check-in, the full site audit score went to eighty five out of a hundred. So up from eighty two to eighty five, still a grade A. And there are twelve failures, which is down from thirty two, ninety three warnings, which is down from a hundred and eleven, and three hundred and thirteen checks passed, up from three hundred and five. So, yes, the overall score only moved three points, but this is actually a pretty big deal because we really only optimized the home page, though adding the pillar pages and schema across the site was also a big plus. The biggest takeaway here is that failures dropped from thirty two to twelve. That's twenty failures cleaned up this month. The next metric was share of model. Now I'm not gonna sugarcoat it. I'm just gonna come out and say it. There was no movement there. Still zero. But, again, this only measures based on the query set that you give the model and track. I don't expect movement right away. This is, of all of the things, the slowest moving metric in GEO, and really that's by design. It takes time to build enough authority and enough presence that models are trained on your name and your reputation. So it's a long game. And I, of course, will continue to report on it honestly every single month. And when there's something to show, you will definitely be the first to know. But I saved the best for last. Do you remember what the last metric was that we were tracking? No? Alright. Fine. I'll give it to you.
It's analytics… I'm so excited about this part. I need you to stay with me here because this is the whole point. From roughly November first through February ninth, that's about three months, the entire period from when I started building the site through the launch and then almost a month after launch, there was zero AI referral traffic. Zero. No Perplexity, no ChatGPT, no Claude, no Grok, no Gemini, nothing… In month one alone, from February ninth through March tenth, after making these optimizations…Perplexity accounted for twenty five sessions…twenty one percent of all the traffic, which made it the number two traffic source for the entire month, right behind direct traffic. And ChatGPT showed up for the first time too, just once, but it showed up. Let that sink in for a second. Three months of working towards building and building and building and finally going live January thirteenth, then kinda just letting it ride for about a month, then starting the case study in February to get that baseline…and then optimizing. And in one month alone, we go from zero AI traffic to twenty five visits from Perplexity… This is huge. Huge. Perplexity is indexing the site. It's understanding what the site is about, and it is surfacing Clemelopy in answers to real queries that real people are asking. And that's twenty five people who clicked through from Perplexity to my website. That number doesn't fully represent how many times Clemelopy surfaced within Perplexity, just the number who clicked through. That, my friends, is the GEO signal. That is what we are working towards, and I could not be more thrilled. The work during the past month, the schema, the pillar pages, the FAQ, the linking, the citations, and so on. That is what made this possible. Now can I draw a direct line from one specific thing I did to those twenty five Perplexity referrals? Not yet.
The timing is significant, though, and it is absolutely worth watching closely. And here's one more thing I want to name. Direct traffic dropped from eighty eight percent of all traffic down to sixty four percent. And if you don't know what that means, it might sound like bad news, but it's not. It means the ecosystem that I'm building is starting to activate. Other channels are beginning to carry some of that weight, and that is exactly what we want. So are we feeling good? I know I am. So here's what I'm gonna be doing for month two. I'm going to finish fully optimizing the home page, including those form labels that I forgot to take care of. And because I've added some things to Clemelopy, I'll likely also update the home page to include those things, including following along with the podcast and the case studies. Now I also analyzed my website traffic a little deeper, and I learned that aside from the home page, the next most visited page on the site was the twenty twenty six GEO playbook page, followed by the case studies page. Of the hundred and twenty two sessions, sixty went direct to the home page, followed by fifteen to the playbook and seven to the case studies. So my next move will be to run Orchard audits and the Orchard Builder on those two pages and then use the results to optimize them accordingly. And I wanna say one honest thing before I close out. Twenty five sessions from Perplexity in the first month genuinely surprised me. I expected progress. I did not expect it that fast. I thought maybe I'd have one from ChatGPT. I honestly wasn't even thinking that I would have any from Perplexity. And I think it's worth saying out loud instead of just burying it in a graph somewhere. This is the whole point of following along. I'm not showing you a polished success story after the fact or some glamorous makeover.
I'm showing you the real work that goes into it in real time, month by month with the actual numbers, including the gaps so that you can do the same thing for your business. And that's it. That's it for our month one Clemelopy case study check-in. If you wanna follow along, make sure you are subscribed to the Canopy. Feel free to follow along over on our YouTube, again, which I will link in the show notes, and that will make sure that you don't miss the month two check-in. And if you want to see the actual audit screenshots and traffic data, head over to our YouTube playlist for Clemelopy Case Study, which will also be linked in the show notes. Well, my friends, thank you so much for being here, and as always, keep growing forward …