Marketing School - Digital Marketing and Online Marketing Tips

How to Hack ChatGPT & Gemini Rankings (It’s Too Easy)

18 min
Mar 26, 2026
Listen to Episode
Summary

The hosts discuss how easily AI systems like ChatGPT and Gemini can be manipulated with fake information, demonstrating through a BBC journalist's experiment. They explore the future job market impact of AI automation and share practical experiences with AI implementation in business workflows.

Insights
  • AI systems are currently vulnerable to manipulation but algorithms will become more sophisticated within 1-2 years
  • Jobs requiring physical skills or human interaction will become more valuable as AI automates other tasks
  • White hat SEO and marketing strategies provide better long-term results than black hat manipulation tactics
  • AI implementation in business requires careful oversight as current systems make costly errors in complex tasks
  • The bottleneck principle suggests humans in non-automated roles will capture exponential value as AI scales other areas
Trends
  • AI manipulation vulnerability creating short-term gaming opportunities
  • Skilled trades workers commanding premium wages due to AI-driven construction acceleration
  • Marketing roles splitting between high-risk tactical work and low-risk strategic work
  • Companies prioritizing AI-savvy employees over traditional skill sets
  • Automated sales workflows becoming standard for lead generation and deal resurrection
  • Voice AI enabling sophisticated robocall and spam campaigns
  • Paid content placements gaming AI citation systems
  • Physical-digital interface jobs becoming the most valuable career positions
Companies
OpenAI
Creator of ChatGPT which was demonstrated to be easily manipulated with fake information
Google
Developer of Gemini AI system that was also shown to repeat false information as facts
BBC
News organization whose tech journalist successfully hacked AI systems in 20 minutes
Uber
Company founded by Travis Kalanick who theorized about AI creating bottleneck value for humans
JP Morgan
Example of company that would still need human secretaries despite AI automation
American Airlines
Airline that caused travel delays leading to expensive AI-recommended flight changes
People
Thomas Germain
Successfully manipulated AI systems by creating fake hot dog eating journalist credentials
Jake Ward
Commented on the BBC journalist's AI manipulation experiment on social media
Travis Kalanick
Theorized that AI automation will make non-automated workers extremely valuable like LeBron James
LeBron James
Used as example of premium wages that plumbers could earn in an AI-automated world
Quotes
"The physics dictate that it drives the bottleneck's wages to infinity. The next decade doesn't belong to whoever out-computes the machine. It belongs to whoever stands at the exact point where the digital engine meets the physical world."
Travis Kalanick
"Using only 20% of your business data is like dating someone who only texts emojis. First of all, that's annoying, and second, you're missing a lot of context."
Host
"If everyone started creating junk and they were always mentioned in ChatGPT and Gemini when they shouldn't be, what's going to happen to the users? They're going to use ChatGPT and Gemini being like, the results suck."
Host
"I honestly think if you're a curious, hardworking human being, there should be no reason that you're not going to thrive in this world."
Host
Full Transcript
2 Speakers
Speaker A

Using only 20% of your business data is like dating someone who only texts emojis. First of all, that's annoying, and second, you're missing a lot of context. But that's how most businesses operate today, using only 20% of their data. Unless you have HubSpot, where all the emails, call logs and chat messages turn into insights to grow your business. Because all that data makes all the difference. I would know because I use HubSpot at my company. Learn more at HubSpot.com. How you can actually hack ChatGPT's and Gemini's rankings. Are you ready for this?

0:00

Speaker B

Yeah.

0:34

Speaker A

You're probably not going to be surprised, but let me pull this up. This guy over here, you see, this is a hot dog. So this guy wrote for the BBC, okay? He says, I hacked ChatGPT and Google's AI and it only took 20 minutes. So this guy, Thomas Germain, I guess the whole hack is he's the best hot dog eating tech journalist, okay? So this guy, Jake Ward, says: be this BBC tech reporter, spend 20 minutes writing a fake blog post, claim you're the world's number one hot dog eating journalist, invent a fake championship to back it up. Watch ChatGPT and Google repeat it as facts within 24 hours. Realize you just manipulated two of the world's most powerful AIs with a single page. Find that users trust AI more than websites because it feels like the answer is coming from the tech company itself. Prove that tricking AI in 2026 is as easy as tricking Google was in the early 2000s.

0:34

Speaker B

What do you think? Yeah, dude, you and I know it's easy to game these systems. We're both believers that this is not going to last. We think the algorithms are going to get much more sophisticated. It's not going to be in, you know, two, three months, but we think a year, two years from now, these algorithms are going to be much, much better. And this is why Google released things like domain authority and looking for links instead of just mentions, because things like that are much harder to manipulate than just looking at, oh, someone just mentioned you and they didn't link, and it's just jotheplumber.com and it's talking about marketing or medical advice. Well, what does a plumbing website know about marketing or medical advice?

1:17
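The links-versus-mentions point can be made concrete with a toy scorer. This is purely illustrative: the weights, the topical-relevance check, and the function itself are assumptions made up for the sketch, not how Google, OpenAI, or any real ranking system actually works.

```python
# Toy scorer: why linked, on-topic citations are harder to game than bare
# mentions. Illustrative only -- all weights and names here are invented.

def citation_score(mentions: int, linked_citations: int,
                   source_topic: str, query_topic: str) -> float:
    """Score a site's credibility for a query topic.

    Bare mentions are cheap to manufacture at scale, so they carry
    little weight. A link from a source whose own topic matches the
    query counts far more, because it is harder to fake.
    """
    MENTION_WEIGHT = 0.05   # easy to spam, low trust
    LINK_WEIGHT = 1.0       # harder to acquire
    # The "jotheplumber.com talking about medical advice" case:
    on_topic = 1.0 if source_topic == query_topic else 0.2
    return mentions * MENTION_WEIGHT + linked_citations * LINK_WEIGHT * on_topic

# 50 spammy mentions of a plumbing site for a medical query score lower
# than a handful of genuine, linked, on-topic citations.
spam = citation_score(mentions=50, linked_citations=0,
                      source_topic="plumbing", query_topic="medical")
legit = citation_score(mentions=3, linked_citations=5,
                       source_topic="medical", query_topic="medical")
print(spam, legit)
```

The design choice mirrors the hosts' argument: any signal that is cheap to generate (a mention) eventually gets discounted, while signals that cost real effort (relevant links) keep their weight.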

Speaker A

Yeah, so what I'll say is I've always been a big proponent of: it's easier to do the black hat things that work in a white hat way. So if you make a legit competition and you get talked about in that way, that's going to do a lot better because there's staying power there. Oh, and by the way, early in my SEO career, I tried the gray hat, the black hat stuff, and I experimented on my own websites when I was like 22, 23 years old. And I got to watch those websites get torched. And so you don't want that to happen, because it's the same story over and over. They have to protect the integrity of their product. And so you might as well just do the stuff that works, but do it in a more long-term, focused way, and you're going to win over the people that like to do the short-term stuff.

1:56

Speaker B

Think of it this way. If everyone started creating junk and they were always mentioned in ChatGPT and Gemini when they shouldn't be, what's going to happen to the users? They're going to use ChatGPT and Gemini being like, the results suck, we're not happy. Google and OpenAI are going to lose revenue. So what do they do before it gets to that point? They adapt their algorithms to get rid of the crap and the spam. So then the results are better. People keep using their products more, they make more money. In essence, you gotta take the long term approach. Even if something works today and it

2:29

Speaker A

could be black hat, you'll be like,

2:59

Speaker B

well, at least I'm getting results for

3:00

Speaker A

three to four months.

3:02

Speaker B

What you need to realize, if you look at the old game of playing cat and mouse, especially when it comes to SEO, a lot of the websites that did funky shady stuff didn't just say, oh, okay, we'll only work for six months. They got put in this black box or in quote unquote, you know, Internet jail, in which a year later, two years later, even after they cleaned up their act, they weren't getting the results that they deserved because they were known for doing funky shady stuff beforehand. So you're better off having a clean slate and doing things the right way because the moment you do anything the wrong way in marketing, you don't know if that's going to be used against you and hurt you even in the future, four or five, ten years from now.

3:03

Speaker A

I want to switch gears here. I want to get your reaction on this. So the former Uber founder, actually I should just say Uber founder. So Uber founder Travis Kalanick said this, plumbers will be paid like LeBron James. You see that?

3:41

Speaker B

I did not see that, but it just hit me when you said former Uber founder. I'm like, still founder.

3:53

Speaker A

Still founder. You can't take that away. Once someone's a founder, they're a founder. So Kalanick says this. Like, everyone, and I've been guilty of saying this, I'll just own up to it, but everyone assumes AI eliminates human value, okay? The physics say the opposite. So Kalanick says this. Let's say the entire world, everything in our world, was automated except for plumbers. You had machines making buildings. You would basically have like a thousand buildings a day. The algorithm can design a skyscraper in a millisecond. It cannot connect the pipes. Okay? So when compute violently accelerates the speed of construction, the unautomated human becomes the ultimate bottleneck, and the bottleneck captures all the margin. So Kalanick is saying, how valuable would those plumbers be? Extremely valuable. Each and every plumber will be paid like LeBron. Why? Because plumbing is a long pole in the tent to progress. So basically he's just saying, at the end of the day, if things go exponential, you're actually going to need more. If we're able to do more with more, why would we not do more? Right? So their economic value goes exponential. You get so much efficiency everywhere else that you need millions of plumbers. The market thinks automation drives human wages to zero. The physics dictate that it drives the bottleneck's wages to infinity. The next decade doesn't belong to whoever out-computes the machine. It belongs to whoever stands at the exact point where the digital engine meets the physical world. That's interesting.

3:59

Speaker B

We already have that happening right now with all these data centers being built. I know people who are electricians that are getting paid millions of dollars, no joke, to work on these data centers, because there's a lack of electricians who are amazing and specialize in working at data centers. And they're making an arm and a leg. And the same goes like with marketing. If they automate most of the stuff in marketing, like content creation and keyword research, and the list goes on and on. But you're amazing at strategy and one of the best. Do you think that they're going to be happy with just taking what's average on the web and then coming up with strategies for that and giving it to you? Or do you think they're going to pay you more, being amazing marketing strategists? Because all the other stuff is automated and really easy and cheap for them to do. The stuff that isn't easy for AI

5:19

Speaker A

to do is going to be worth

6:06

Speaker B

a lot more, and they'll Spend a lot on it.

6:07

Speaker A

I honestly think if you're a curious, hardworking human being, there should be no reason that you're not going to thrive in this world. And what Neil just talked about, these electricians making millions of dollars. By the way, I look at this Aman that's being built every single day, and I'm like, man, they must be making an arm and a leg, because they work a night shift and a day shift too. I don't know if you know that, Neil. So the whole time I can hear the construction. So, yeah, let me ask you this.

6:09

Speaker B

That building won't be done for like another six years, supposedly.

6:32

Speaker A

Probably not. Yeah. But they're, they're making fast progress on it. So. Neil, have you seen this? So let me, let me make this here, let me refresh this real quick.

6:35

Speaker B

I have not seen that. I don't even know.

6:44

Speaker A

This is the AI exposure of the US job market, for those of you that can't see the screen. So these are 342 occupations, okay? Color is AI exposure, okay? So if you look at this, if you're red, that means your job is really at risk to AI. And if you're in the green, that means these jobs are going to be okay. So, for example, AI exposure for hand laborers and material movers: two out of ten, okay? It shows like 7 million people that did this job, median pay is 38 grand, and so on and so forth. And it shows the education level required. So it shows basically what jobs could be in trouble and what isn't. But like secretaries and administrative assistants: 8 out of 10 AI exposure. Okay. Wow.

6:45

Speaker B

Really? 8 out of 10?

7:23

Speaker A

Yeah.

7:25

Speaker B

I don't think office secretaries and administrative assistants are that high of a risk, because, like, let's say if you work at the JP Morgans, they're going to still want someone to walk and greet and all that stuff.

7:26

Speaker A

Yeah, I think it depends, but I, I think we can agree on this one, right? Customer service representatives.

7:35

Speaker B

9 out of 10.

7:39

Speaker A

Okay, bookkeeping, accounting and auditing clerks. I think so. Bookkeeping, we're not talking about, like, tax strategists or financial strategists.

7:40

Speaker B

I think Bookkeeping. I don't know if it's a 9 out of 10.

7:49

Speaker A

I agree.

7:52

Speaker B

It's high. Maybe 7 or 8. Because you're still gonna have some humans. Just to double check.

7:52

Speaker A

Yeah. So Neil, let's, let's play a game here. Do you want me to go for. I can focus on the red or you want me to hit the ones that are not at risk?

7:56

Speaker B

Hit the ones that are not at risk. It'll be more fun.

8:03

Speaker A

Electricians down here. Look at this. 2 out of 10.

8:05

Speaker B

I was thinking home services, HVAC, plumbing, roofing. I think a lot of those are less impacted.

8:08

Speaker A

Yep. So heating. Oh, right here. HVAC.

8:14

Speaker B

Two out of ten.

8:19

Speaker A

Childcare workers. Yeah, I want a human. Security guards, I probably don't want robots being my security, because there's probably a lot of liability there. Say they get hacked.

8:20

Speaker B

You're screwed anyways.

8:29

Speaker A

Yep, yep.

8:30

Speaker B

How about marketing? 5 out of 10?

8:31

Speaker A

I think marketing is pretty high. Let's just see. I think it's hard to find here. Let's just go with the ones that are highest risk first. So software developers, I think we can agree. The ones that don't adapt, that's what I would say. Data scientists. Man, I never expected to see data scientists hit so hard. Like, I would just say, let's caveat this: the ones that are really good at strategy, which we just talked about, those people are going to be okay. We agree on that, right? Yes. Okay. So I think marketing's like 5 out of 10 or something like that. But yeah.

8:33

Speaker B

Yeah. And it gets tricky too, because what kind of marketing role, right? If it's just hitting up people for negotiating Instagram sponsorships, I think it's closer to an 8 or 9 out of 10. If it's something like you're focusing on, you know, strategy for international expansion for a company that's looking to grow faster, I think it's pretty low.

8:58

Speaker A

Do you see marketing here?

9:18

Speaker B

I can't see.

9:19

Speaker A

Oh, advertising. Right here. Advertising, promotions and marketing managers. Eight out of 10.

9:20

Speaker B

Yeah. I think it really depends for all of these, including the role. Just like the secretary example I gave. Like, I don't support banks getting rid of them.

9:24

Speaker A

Same thing for top executives, too. Like, what executives are we talking about? CEOs?

9:32

Speaker B

Yeah, yeah, you nailed it. Like, if you're saying top executives, I don't see AI replacing any of my top executives. If it does, it's because they sucked and they couldn't adapt to the AI world, and they're going to be replaced with a new executive that is adapting. Same with, like, engineers. I know they said it was, I think you said, 8 or 9 out of 10. We're not actually looking to replace engineers and get rid of them. We're looking for engineers, at least at my organization, who are great at using AI and can move faster. And the ones we're getting rid of aren't using AI, can't adapt to it. And when we get rid of them, we're replacing them with a new engineer that can use AI. So instead of the model of, oh, really, you need two engineers now, on our end we're like, no, if we have 20 engineers, we still want 20 engineers. We just want 5x the output and to do so much more so we can grow faster.

9:38

Speaker A

I found the most effective thing when it comes to hiring now is, when they get to me, all I talk about is AI. I just want to have an exceptional conversation for 30 minutes about AI. Because if you knock out AI, you knock out all of our core values, and you just leverage it.

10:26

Speaker B

Yeah, it's like you can do 10x more.

10:39

Speaker A

Why would I not try to go for you all day? Right? It's the people that say, whoa, I'm really interested in learning right now.

10:42

Speaker B

Right. Can't do it. So I'm interested in learning right now. What happens six months down the road? Well, that's passed now. I'm not interested in learning anymore.

10:47

Speaker A

Yep, can't do it. All right, well, anything else from your

10:57

Speaker B

side? Dude, you want to hear an AI failure?

11:00

Speaker A

Yes. Yes. Okay.

11:03

Speaker B

So for shits and giggles, we had AI help plan my itinerary. Flight time, when to come, when to go. It did a good job telling me when to leave to come to London. The bad part was, and this wasn't AI's fault, my American Airlines flight was 3-plus hours delayed because of some plane issue. They couldn't get the air conditioning working, so then we had to switch planes. That's not AI's fault, right? When it planned out my schedule, we told it how long I normally speak, because I wanted to see. Remember that Elon thing he talked about a long time ago, where his assistant wanted a raise and he said, okay, go on vacation, let me think about it, and he wanted to try doing it all on his own? My assistant did not ask for a raise. She's amazing. I'm not going to replace her. This was just an experiment. When it came to the schedule, in between, it started putting large blocks of time, including the travel time, which is fine, but it just overinflated how many hours. If I had to guess, it put roughly 60% more time than needed for most things. Now, of course, I can keep fine-tuning it and it'll get better in the future. On the way back, we ended up asking when I should fly, and it straight up picked a nighttime flight when I'm done at 9:30 in the morning. So I was so pissed on that one mistake. Guess how much it cost in the change fee to change my flight?

11:05

Speaker A

2,000 bucks.

12:26

Speaker B

No, $7,000. Oh, wow.

12:27

Speaker A

Wow, that's.

12:30

Speaker B

That's a eight.

12:31

Speaker A

That's... ouch. You see, that's why I wouldn't rely on it right now to deal with stuff like that, because that stuff drives me insane. So I'd have too much anxiety to hand that off right now.

12:32

Speaker B

Oh, I was pissed. So I did change my flight. I used Amex points, because on the Centurion card, if you book it with points, you get 50% back. But I was pissed. And then I got the refund on my existing flight and used that flight

12:42

Speaker A

credit for a future trip because I

12:53

Speaker B

fly like every other week or every week, so it's not a big deal. But I was still pissed, because I just wasted a ton of points on something that I didn't need to, because AI recommended a flight back and it was a worse flight. It was like, I land pretty much at midnight. And AI knows I have, I think it's called, the Private Suite, you know, in LA. Yes. Where they just pull you off the plane, you don't have to go through immigration, and then the car picks you up there. So I save time.

12:54

Speaker A

Right.

13:18

Speaker B

It knows I have that. It's all inputted in there. It didn't tell my team to book it or any of it. It'll just say, this is the ideal time, traffic is lower, and we inputted this stuff in advance, and it just really shit the bed.

13:18

Speaker A

Yeah. I think the key takeaway here is you have to be careful about what you actually hand off to it. Funny enough, even though I talk about AI all the time, I don't want to touch that stuff, Neil, because that stuff will just drive me nuts. I had a follow-up point to this one, but I forgot what I was going to bring up with this one. But I think we talked about this actually at lunch last week. It's like, I think we're going to see a lot, because of ElevenLabs and how good these voice models are.

13:31

Speaker B

I think we're going to see a

13:53

Speaker A

big rise in robocalls and spam texting as well. We're going to see a lot more of that too.

13:54

Speaker B

I would just say that with the

13:58

Speaker A

outreach stuff that we have going on right now, you'll have a lot more experiments running for you. So the math that you can do on this is, if it's a good growth team, a human growth team, maybe you're running 50 experiments a year as a growth team, right? Obviously, each individual is probably running more experiments. But if you have this thing that's constantly running experiments and you're driving enough volume, if you can run a hundred experiments a day or something like that, I'm just making the numbers up, you know, if you can truly run 36,000 experiments or micro-experiments a year, how much faster is your company going to grow? It remains to be seen. But look at how Karpathy talks about how quickly they're doing AI research. My hope is that can carry over into marketing, which is why we're doing it that way. So we'll see what happens.

14:00
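The back-of-the-envelope experiment math above, using the hosts' own admittedly made-up numbers, works out like this:

```python
# Throughput comparison from the discussion: a human growth team running
# ~50 experiments a year vs. an automated pipeline running ~100 a day.
experiments_per_year_human = 50
experiments_per_day_automated = 100

experiments_per_year_automated = experiments_per_day_automated * 365
speedup = experiments_per_year_automated / experiments_per_year_human

print(experiments_per_year_automated)  # the "36,000 experiments" ballpark
print(speedup)
```

100 a day is 36,500 a year, which is where the rounded "36,000" figure comes from: roughly 730 times the volume of the 50-a-year human team.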

Speaker B

Yeah, I'm just hoping they can fix the little issues with these LLMs. Like, you know, when you want to get cited, a lot of companies are paying for articles and paying for placements on other websites, and they'll talk about how they're the best solution for X or the best product for X. And it's getting cited more by ChatGPT. The problem right now that you're seeing with ChatGPT and Gemini and a few of the LLMs is they can't decipher if the content that they pulled was paid or not. Shockingly, I don't know why they can't. And it's causing a lot of people to pay and get organic results when they shouldn't be able to get those results. There's a lot of nuances like that. Similar to the example I gave you: when I'm asking for specific job titles, it is pulling job titles, but that person could have had that job title in the past. Even though we told it so many times, no, it needs to be their current job title at the current company they work for. It just messes up on little things like that.

14:36

Speaker A

Yeah. Okay. The sales workflows that we have right now: Arrow is our sales agent, so it will surface cold emails to target cold companies based on our ICP. That's one that's not necessarily new to anybody. But the second one is Deal Reviver, or Deal Resurrector. So obviously you know you're going to lose deals, right? But oftentimes, if you're coming in number two, you just have to find people at the right time. Maybe, you know, in three months, six months, again, you can resurface these. So we have a deal resurrector, but we also have one, Neil, for any inbound contact that comes in, whether they come into the email list or maybe they watch one of the webinars. What happens is it will try to see if they're an ICP fit and then reach out with a custom message as it relates to their role, and some type of custom personalization in the beginning, right? So the whole idea here is, now that you have these Claudes, you can build these different sales workflows, and then that enables your team to get more done with less. So anyway, all that to say, guys, that is it for today and we'll see you tomorrow.

15:26