Clemelopy Case Study – Month One Results
The first month results from Clemelopy's own GEO case study — schema, pillar pages, internal linking, and the moment Perplexity showed up for the first time.
Episode Summary
In this episode, Jen Shannon unpacks the Month One results from Clemelopy's self-directed GEO case study — the one where Clemelopy optimizes itself using its own tools, in real time, with the actual numbers on display.
This is not a polished success story. It's a live experiment, documented month by month, so you can watch the strategy unfold and see what Generative Engine Optimization actually looks like for a brand new website.
Before diving into results, Jen gives a full recap of how the case studies work — what's being measured, why, and how the data is captured across Share of Model, AI referral traffic, and Orchard Audit scores. She also introduces the other businesses being followed: Dana Goodson Photography, Jill Stonier, Maid with Kindness, and Jen Shannon Creative Atelier.
Then come the numbers. The homepage audit jumped from a C (71/100) to an A (87/100) in one month. Authority alone went from 20% to 87% — driven by schema markup, new pillar pages, and external citations. Full site failures dropped from 32 to 12. And Share of Model? Still zero — which Jen names honestly, because that's the whole point.
But the headline result is AI referral traffic. From November through February, there was zero AI traffic. Zero. Then in Month One alone, after optimizations, Perplexity drove 25 sessions — 21% of all traffic for the month, making it the number two source behind direct traffic. ChatGPT showed up for the first time too.
This episode covers:
How the Clemelopy case study works and why the site was intentionally not optimized at launch
What was measured: Share of Model, AI referral traffic, and Orchard Audit scores
The exact optimizations made in Month One: schema, pillar pages, FAQ, internal linking, external citations
Homepage audit results: from 71 to 87, authority from 20% to 87%
Full site audit: failures dropped from 32 to 12
Share of Model: still zero, and why that's expected
The AI traffic result that genuinely surprised Jen — 25 Perplexity sessions in one month after three months of zero
What Month Two will focus on
Why direct traffic dropping from 88% to 64% is actually great news
If you've ever wondered what GEO looks like in practice — the real work, the real numbers, the real gaps — this is the episode.
Transcript
Hello, everyone, and welcome back to the Canopy, where we talk GEO, founder life, and growing your visibility in AI search. I'm Jen Shannon, your host and founder of Clemelopy. Today's episode is a big one. I'm guessing you can already tell by the title, but in this episode, I'm unpacking my first month results from Clemelopy's case study on Clemelopy itself. That's right. I set out on a journey to show you how Clemelopy can optimize itself using itself.
And if you're just joining me on this ride, you can take a look at Clemelopy's baseline study video over on our YouTube channel, which I will link in the show notes. I didn't release it as a podcast because at the time, I didn't have a podcast. And well, now I'm looking to move forward, not backward. So we're just gonna start it from here on the podcast.
So, yes, I am documenting my own GEO journey in real time, one month at a time, so you can watch the strategy unfold and see the actual numbers as they move. No polishing it up after the fact. No waiting until it's a success story. You are along for the ride from the very beginning.
But before I dive into the results, let me give you a little recap on our case studies. I'll start with the why. Optimizing for AI search — also known as generative engine optimization — is a very new field. And when I say new, I mean we are all figuring this out in real time. There are no ten year case studies to pull from. There are no industry benchmarks that have stood the test of time. Nothing. Not only did I wanna create proof of concept for Clemelopy itself, but I wanted to show the entire process, almost like a makeover, so that at the end we can point to proof, pinpoint what worked, improve our tools and processes over time, and show how GEO is a long game without overnight results.
In addition to that, when I created Clemelopy's website, I intentionally did not optimize it for AI search so that you could watch and follow along side by side with me. I know what you're thinking — yeah, Clemelopy will have its own case study and of course it's gonna be great. But we have several case study participants we are also following. You've likely already seen or listened to Dana's baseline. Dana is a photographer located here in Jacksonville, Florida — Dana Goodson Photography in case you wanna look her up. You're also going to meet someone named Jill Stonier very soon, and then Suzie with Maid with Kindness Cleaning. I'm also beginning a case study on my other business, Jen Shannon Creative Atelier, but that case study is for a different purpose and with a different strategy that I'll go into at another time. So these are real businesses with real websites, and we are doing this together to see what results we can garner from it.
All of that being said, let me tell you a little bit about what we are measuring. First, we are measuring referral traffic from ChatGPT, Claude, Perplexity, Grok, and Gemini. As of now, there is no way to truly measure how much your business is being recommended within these AI searches. And the reason is that these AI search companies are their own businesses, and they own their own data and their own tools. Allowing an outside platform like Google Analytics to tap into that data would be a massive decision and one that none of them have made yet. So the only way to measure currently is to measure the traffic coming to your website from their platforms. That's what we are tracking for this first metric.
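For readers following along in text, here's a rough sketch of what that referral tracking amounts to in practice. This is an illustrative example, not Clemelopy's actual tooling, and the referrer domain list is an assumption:

```python
# Hypothetical sketch: bucket website sessions by AI referrer domain.
# The domain-to-engine mapping below is illustrative, not an official list.
from collections import Counter
from urllib.parse import urlparse

AI_REFERRERS = {
    "chatgpt.com": "ChatGPT",
    "chat.openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "grok.com": "Grok",
}

def ai_source(referrer_url: str):
    """Return the AI engine name if the referrer is a known AI domain, else None."""
    host = urlparse(referrer_url).netloc.lower()
    return AI_REFERRERS.get(host)

def count_ai_sessions(referrers):
    """Tally sessions per AI engine from a list of referrer URLs."""
    return Counter(s for r in referrers if (s := ai_source(r)))

sessions = [
    "https://www.perplexity.ai/search?q=geo",
    "https://chatgpt.com/",
    "https://www.google.com/",  # not an AI engine, so it is ignored
]
print(count_ai_sessions(sessions))  # Counter({'Perplexity': 1, 'ChatGPT': 1})
```

Real analytics platforms report this as session source/medium; the point is simply that AI visibility is inferred from click-throughs, not from the engines' internal data.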
The next metric we are measuring is what we call Share of Model. This is where we define our business, our main competitors, a list of questions or statements that we believe someone could be using to search in these AI engines and where we would like our business to appear and be recommended. We then go to these AI sites and prompt them with the same questions or statements across all of them and track their responses, including who they are mentioning and recommending, how often, and in what position they show up. We also look at things like whether the AI is citing from your site, whether the mention is a recommendation, and how you stack up against competitors. The higher your share of model, the more AI is treating your brand as a trusted source.
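To make the mechanics concrete, here's a minimal hypothetical sketch of a Share of Model calculation. The data shape, prompts, and brand names are all illustrative assumptions, not Clemelopy's real methodology or query set:

```python
# Hypothetical Share of Model sketch: for each tracked prompt, record which
# brands each AI engine mentioned. A brand's share is the fraction of
# engine responses that mention it. Weighting by position, citation, or
# recommendation strength (which the episode also describes) is omitted here.

responses = [
    # (engine, prompt, brands mentioned in the answer)
    ("Perplexity", "best GEO consultants", ["CompetitorA", "Clemelopy"]),
    ("ChatGPT",    "best GEO consultants", ["CompetitorA"]),
    ("Perplexity", "how to optimize for AI search", ["CompetitorB"]),
    ("ChatGPT",    "how to optimize for AI search", []),
]

def share_of_model(brand, responses):
    """Percent of tracked responses that mention the brand at all."""
    mentioned = sum(1 for _, _, brands in responses if brand in brands)
    return 100 * mentioned / len(responses)

print(share_of_model("Clemelopy", responses))    # 25.0
print(share_of_model("CompetitorA", responses))  # 50.0
```

A brand that no engine mentions for any tracked prompt scores zero, which is exactly where a brand new site starts.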
We also use our Orchard Audit, which can be run on either a single page on your website or on your website as a whole. We score these sites and pages based on six categories of criteria, and the results clearly pinpoint what is wrong and where those issues need to be fixed. And we use our Orchard Builder, which analyzes your page and makes optimization recommendations based on our Orchard Ecosystem Framework. We take those optimization recommendations and implement what makes sense to implement and then measure our progress through analytics, Share of Model, and Orchard Audits.
Now you probably caught that we take those optimization recommendations and implement what makes sense. I want to give you an example of a recommendation that the Orchard Builder may give that wouldn't make sense to implement. Because Clemelopy is such a new business, we don't have a lot of reviews. One recommendation was to add more social proof in the form of case studies or reviews, testimonials, and so on. Well, we don't really have much for that, so we implemented what we could. But beyond that, we just have to keep working to grow our user base and continue allowing time for our case studies to show proof.
Each month, we check in, run the numbers, and show you exactly where things stand, what's improving, what's not, and what we're doing about it. We document all of it here on the podcast and over on our YouTube channel if you wanna watch along. The goal isn't to show you a perfect success story at the end of the twelve months. The goal is to show the work as it happens so you can apply the same thinking to your own business and show proof of concept.
But really quickly, before we get into the results for month one, I want to let you know that this podcast is brought to you by Clemelopy. If you've been wondering why your leads have slowed down even though your work hasn't changed, I want you to know there's actually a reason for that, and it's probably not what you think. I offer a website visibility audit and strategy consultation where I go through your site, figure out what's getting in the way of AI search engines understanding what you do, and then hand you a clear, prioritized plan to fix it, including a one hour strategy call. It starts with a free fifteen minute call, and it's not a sales call. Just a conversation to see if this consultation makes sense for you and where you are in your business. You can book your free fifteen minute call at Clemelopy.com/workwithus.
Alright. So let me set the stage. The first thing I did to start the case study was gather our stats. It's actually quite simple. We had no Share of Model, no AI referral traffic, and no citations. Our full site audit wasn't terrible — it was an eighty two out of a hundred. I always recommend starting with the home page when it comes to optimization, and there really is no scientific reason for it other than my personal opinion that the home page is like your business's storefront on the Internet. And to me, that's the most important thing for getting people to want to come inside.
I ran an audit on our homepage, and it was pretty terrible. It was seventy one out of a hundred. I then ran our Orchard Builder and used the results to implement its optimization recommendations and then waited to see at month one how those changes seemed to have affected our measurements.
Here is where we get into the nitty gritty of what I actually went in and did. The first thing I did was add pillar pages for Clemelopy's core canonical pillars. I know that sounds like some big technical jargon, but basically these are the main topics Clemelopy represents expertise on. These give the site a topical architecture that AI can actually follow. Clemelopy's core canonical pillars are GEO, content strategy, search evolution, small business growth, brand authority, and AI visibility. So every piece of content that we create on our website needs to be able to roll up to one of these pillars. They not only anchor who we are and what we do, but they also show what we can provide expertise on. So I created a single page for each of these pillars on my site and created them so that the URL would be exactly those terms and then created the content for each of those pages.
The next thing I did was add schema markup across all pages. Schema was one of the top flags in the baseline audit and had come back as a failure, which means AI had no structured data to grab on to. Schema is basically a way of labeling your content so that AI and search engines can understand what something is, not just what it says. So I went in and fixed that across the board, including on the home page.
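For context, schema markup most commonly takes the form of JSON-LD embedded in the page's HTML. Here is a minimal sketch with placeholder values, not Clemelopy's actual markup:

```python
# Minimal illustration of JSON-LD structured data, the kind of thing
# "schema markup" refers to. The field values are placeholders. In
# production, this JSON is embedded in the page inside a
# <script type="application/ld+json"> tag so crawlers can parse it.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Clemelopy",
    "url": "https://clemelopy.com",
    "description": "Generative Engine Optimization tools and audits.",
}

print(json.dumps(organization, indent=2))
```

Other schema.org types like `FAQPage` and `Article` follow the same pattern, which is part of why an FAQ page (covered next) pairs so naturally with structured data.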
I also created a global FAQ page because this was also flagged in the baseline. FAQ pages matter a whole lot for GEO because AI is literally answering questions all day every day, and having a page that maps to that format gives it something to pull from.
Now here's where I circle back to why the pillar pages were important for optimizing the home page. One of the major parts of optimization is what we call contextual links. These aren't your footer links or your navigation links. They're the internal and external links woven into your actual page content: internal links connect pages on your own site, while external links point to a separate domain you don't own. So I added contextual internal links throughout the copy and pointed them to the new pillar pages, and that was it for the internal linking. I also added external links through a data carousel that links out to supporting studies, because citing sources signals credibility and authority to AI systems.
As I mentioned earlier, testimonials were flagged as a gap as well. I don't have a lot of them, and I'm in my first two months of business. It is a real constraint. I'm not going to manufacture results to check a box, and I think naming that is exactly what makes a case study worth following. You can see the real gaps alongside the real wins. This is also a good spot to mention that GEO isn't a one and done thing. As AI systems change and we begin to collect data through these case studies, we'll be able to pinpoint and pivot in our strategies, so checking a box doesn't mean it stays checked.
Alright. So I ran the reports for Share of Model, Orchard Audit site, and Orchard Audit page. This is a true apples to apples comparison, and here's the breakdown.
When we first launched the case study on February ninth, the home page was sitting at a grade C, seventy one out of a hundred. There were five things that failed, three warnings, and thirteen passed. And now at our one month check-in, the home page is at a grade A, eighty seven out of a hundred. There is only one failure remaining, four warnings, and sixteen passes. So our grade went up sixteen points. Our failures dropped from five to one. We gained one warning, and we gained three additional passes.
Now let me walk you through what actually moved in the audit. We use six categories to score an audit: clarity, authority, structure, AI readability, media, and accessibility. Our authority went from twenty percent all the way up to eighty seven percent. That is a big jump, and it was driven by three things — adding schema, building out the pillar pages, and bringing in external citations and then linking to those citation sources. Those three things together told AI that this site has depth, it has structure, and it's connected to a broader body of knowledge.
I am so excited about that number. Next, the structure went from sixty five percent up to ninety percent. Clarity held at a hundred percent, which was great. The one failure that's still remaining is a form field that is missing a label, which is actually an accessibility fix, not a GEO fix. It's a quick thing to address — I just dropped the ball on remembering to do it. The remaining flags are things like limited proof of expertise and add more specific facts. Those are honest reflections of where the site still needs to grow, and you simply can't manufacture case study results or testimonials in one month.
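As a purely illustrative aside, here is one plausible way six category scores could roll up into an overall grade. The Orchard Audit's real weighting isn't public, so the equal-weight formula and the three unstated category values below are assumptions, chosen only so the example lands on the reported eighty seven:

```python
# Hypothetical roll-up of six category scores into a 0-100 audit score.
# Clarity, authority, and structure come from the episode; the other three
# values and the equal weighting are invented for illustration only.
month_one = {
    "clarity": 100,        # stated in the episode
    "authority": 87,       # stated in the episode
    "structure": 90,       # stated in the episode
    "ai_readability": 80,  # assumed value
    "media": 75,           # assumed value
    "accessibility": 90,   # assumed value
}

def overall(scores):
    """Simple equal-weight average across categories (an assumption)."""
    return sum(scores.values()) / len(scores)

print(round(overall(month_one)))  # 87
```

The takeaway is just that a big move in one category (authority, twenty to eighty seven) can swing the overall grade even when other categories barely change.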
Next is the full site audit results. The baseline was scored at eighty two out of a hundred, which was a grade A. It had thirty two failures, a hundred and eleven warnings, and three hundred and five checks passed. For our one month check-in, the full site audit score went to eighty five out of a hundred — still a grade A. There are twelve failures, down from thirty two. Ninety three warnings, down from a hundred and eleven. And three hundred and thirteen checks passed, up from three hundred and five.
Yes, the overall score only moved three points, but this is actually a pretty big deal because we really only optimized the home page, though adding the pillar pages and schema was also a big factor. The biggest takeaway here is that failures dropped from thirty two to twelve. That's twenty failures cleaned up this month.
Now Share of Model. I'm not gonna sugarcoat it. There was no movement there. Still zero. But again, this metric only measures against the query set you define and track. I don't expect movement right away. This is, of all of the things, the slowest moving metric in GEO, and really that's by design. It takes time to build enough authority and enough presence that models are trained on your name and your reputation. So it's a long game. And I will continue to report on it honestly every single month. When there's something to show, you will be the first to know.
But I saved the best for last. Do you remember what the last metric was that we were tracking? It's analytics.
I am so excited about this part. From roughly November first through February ninth — that's about three months, the entire period from when I started building the site through the launch and then almost a month after launch — there was zero AI referral traffic. Zero. No Perplexity, no ChatGPT, no Claude, no Grok, no Gemini. Nothing.
In month one alone, from February ninth through March tenth, after making these optimizations — Perplexity accounted for twenty five sessions. Twenty one percent of all the traffic, which made it the number two traffic source for the entire month, right behind direct traffic. And ChatGPT showed up for the first time too, just once, but it showed up.
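For anyone checking the arithmetic, the session total mentioned later in the episode makes these percentages line up:

```python
# Sanity check on the month-one numbers: 25 Perplexity sessions out of
# the 122 total sessions stated later in the episode.
perplexity_sessions = 25
total_sessions = 122

share = 100 * perplexity_sessions / total_sessions
print(round(share, 1))  # 20.5, i.e. roughly the 21 percent quoted
```

So "twenty one percent" is the rounded figure; the underlying ratio is twenty five out of a hundred and twenty two.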
But let that sink in for a second. Three months of building, finally going live January thirteenth, letting it ride for about a month, starting the case study in February to get that baseline, and then optimizing. And in one month alone, we went from zero AI traffic to twenty five visits from Perplexity. This is huge.
Perplexity is indexing the site. It's understanding what the site is about, and it is surfacing Clemelopy in answers to real queries that real people are asking. And that's twenty five people who clicked through from Perplexity to my website. That number doesn't represent how many times Clemelopy surfaced within Perplexity, only how many people clicked through. That, my friends, is the GEO signal. That is what we are working towards, and I could not be more thrilled.
The schema, the pillar pages, the FAQ, the internal linking, the citations — that is what made this possible. Now can I draw a direct line from one specific thing I did to those twenty five Perplexity referrals? Not yet. The timing is significant, though, and it is absolutely worth watching closely.
And here's one more thing I want to name. Direct traffic dropped from eighty eight percent of all traffic down to sixty four percent. And if you don't know what that means, it might sound like bad news, but it's not. It means the ecosystem that I'm building is starting to activate. Other channels are beginning to carry some of that weight, and that is exactly what we want.
So here's what I'm gonna be doing for month two. I'm going to finish fully optimizing the home page, including those labels that I forgot to take care of. And because I've added some things to Clemelopy, I'll likely also update the home page to include them, including following along with the podcast and the case studies. I also analyzed my website traffic a little deeper, and I learned that aside from the home page, the next most visited page on the site was the twenty twenty six GEO playbook page, followed by the case studies page. Of the hundred and twenty two sessions, sixty went direct to the home page, followed by fifteen to the playbook and seven to the case studies. So my next move will be to run Orchard Audits and Orchard Builders on those two pages and then use the results to optimize them accordingly.
And I wanna say one honest thing before I close out. Twenty five sessions from Perplexity in the first month genuinely surprised me. I expected progress. I did not expect it that fast. I thought maybe I'll have one from ChatGPT. I honestly wasn't even thinking that I would have any from Perplexity. And I think it's worth saying out loud instead of just burying it in a graph somewhere.
This is the whole point of following along. I'm not showing you a polished success story after the fact or some glamorous makeover. I'm showing you the real work that goes into it in real time, month by month with the actual numbers, including the gaps so that you can do the same thing for your business.
And that's it for our month one Clemelopy case study check-in. If you wanna follow along, make sure you are subscribed to The Canopy. Feel free to follow along over on our YouTube as well, and that will make sure that you don't miss the month two check-in. And if you want to see the actual audit screenshots and traffic data, head over to our YouTube playlist for Clemelopy Case Study, which will also be linked in the show notes. Thank you so much for being here, and as always, keep growing forward.
Join Clemelopy Beta
Lock in founding member pricing and start optimizing your content for the future of AI-powered search.
Get Started — From $29/mo