Stanley Zong, a high school student from Palo Alto, was rejected by 16 universities, but was immediately hired by Google for a software engineering position that normally requires a doctorate or equivalent experience. I mean, it is just a completely absurd contrast. You have this teenager getting rejection letter after rejection letter from basically every college he applied to. And then the response to that single localized failure is the launch of this massive multi-state civil rights lawsuit that is written almost entirely by artificial intelligence. Which brings us to the core question here. How does a student who is deemed completely unqualified for a basic undergraduate degree use an AI chatbot to prove that they are the victim of systemic institutional bias? Right. And to really get into that, we have to start by looking at the objective reality of his background because all the friction in the story begins with the raw data of his academic profile compared to the actual outcome of his applications. Let's actually look at his resume because saying he was just a student, I mean, that doesn't even begin to cover it. No, not at all. The credentials are mathematically extraordinary. We're talking about an unweighted grade point average of 3.97 and a weighted GPA well over 4.0. Wow. He attended a highly competitive high school, placed in the top 9% of his class, and he achieved a near perfect SAT score. He literally missed the maximum score by exactly 10 points. Which is wild. Those academic metrics alone place him in this incredibly elite bracket. I mean, that test score puts him in the top fraction of 1% of millions of annual test takers. Exactly. And his class rank actually qualified him for a state-specific program that guarantees admission to at least one public university campus for high-performing residents. But, you know, plenty of students have high grades. Right. Grades aren't everything. Exactly. 
What elevates this beyond just a standard rejection complaint is the professional validation he built on his own time. Yeah, because beyond the classroom, he founded a free electronic signature startup called RabbitSign. And I want to be clear, this was not some minor side project or a high school coding club assignment. It was a fully functional, HIPAA-compliant platform. We really should pause on what that actually means. Because building a HIPAA-compliant platform as a solo high school developer is a staggering technical achievement. Seriously. HIPAA compliance means the software meets incredibly stringent federal guidelines for protecting sensitive patient health information. You can't just throw some code together on a server in your bedroom. Right. You have to build end-to-end encryption. Yes. Encryption, secure key management, robust audit trails. You have to ensure that data is completely protected both in transit and at rest. Because if a medical clinic uses your platform to sign patient forms and there's a security breach, you're facing massive legal consequences. Exactly. And he built this to handle heavy user demand without charging any fees. And Amazon Web Services actually recognized this startup for its exceptional efficiency and secure architecture. They selected it to be featured in a professional case study. Which is just huge. For Amazon to look at a high schooler's architecture and say, yes, this is an example of how to build efficiently on our servers, that is incredible validation. It's the exact kind of real-world application of skill that usually commands immediate attention. And I mean, it did command attention, just not from the universities. Right. So the Google recruitment process is fascinating. A recruiter from the company actually reached out to him when he was in middle school. Middle school. That's insane. Yeah. The recruiter was completely unaware of his age.
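[Editor's aside: to make the audit-trail requirement concrete, here is a minimal illustrative sketch of a tamper-evident log built by hash-chaining entries, so any after-the-fact edit is detectable. This is not RabbitSign's actual implementation, which isn't public; it just shows the kind of mechanism HIPAA-grade auditing implies.]

```python
import hashlib
import json

def append_entry(log, event):
    """Append an audit event, chaining it to the hash of the previous entry.

    Each entry stores the SHA-256 hash of the entry before it, so altering
    or deleting any historical record breaks every hash that follows.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log):
    """Recompute the chain and confirm no entry has been tampered with."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "patient_form_signed")
append_entry(log, "document_downloaded")
assert verify(log)
log[0]["event"] = "record_deleted"  # tampering with history is detected
assert not verify(log)
```

The chain makes deletion as detectable as modification, which is the property auditors actually care about.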
They just reached out based entirely on his online technical contributions and open source commits. Wow. Once they got him on a call and realized he was literally a minor, they paused the process. But they kept his file and right as he finished high school, they brought him in for the real thing. And the evaluation he endured is brutal. It was a 10-hour process involving five randomly selected Google engineers. 10 hours. Yeah. And these engineers are specifically trained to assess both technical capabilities and soft skills, completely blind to his background, his age, or any external influence. Let's explore what that 10-hour evaluation actually looks like. Yeah. Because it is not just answering basic questions about your resume. Oh no. A technical interview at that level involves multiple rounds of intense whiteboarding and problem solving. They are testing you on data structures, complex algorithms, graph theory, dynamic programming. Sounds exhausting. It is. They will present this massive abstract problem, something like designing the back end for a globally distributed caching system, or like a ride-sharing dispatch algorithm. Right. And the candidate has to write functional code on a white board while explaining their logic, optimizing for speed and memory usage, and accounting for all these weird edge cases. And after running that gauntlet, they offered him an L4 position. That specific level designation is crucial here. L3 is the standard entry-level role for a recent college graduate with a computer science degree. Okay. L4 is a mid-level software engineer. It's a position typically reserved for individuals holding a PhD or someone who possesses several years of high-level industry experience. I'm looking at these stats, and they just seem flawless. But are we sure they didn't just go easy on him because he was this young prodigy? Well, the structure of the process prevents that. 
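[Editor's aside: for a flavor of the dynamic-programming questions that style of interview leans on, here is a classic of the genre, minimum-coin change. This is illustrative only; we have no record of what he was actually asked.]

```python
def min_coins(coins, amount):
    """Classic DP interview problem: fewest coins summing to `amount`.

    dp[a] holds the minimum number of coins needed to make amount a;
    each amount is built from smaller, already-solved subproblems.
    """
    INF = float("inf")
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 11], 15))  # 3 (5+5+5; a greedy 11+1+1+1+1 would need 5)
```

The interview interest is rarely the answer itself but whether the candidate can explain the subproblem structure and the time/space trade-offs while writing it.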
The compensation structure at the company naturally disincentivizes interviewers from over-assessing a candidate's qualifications. Because if you hire someone at an L4 level and they cannot actually perform L4 work, it actively damages your team's productivity and your own performance metrics as an interviewer. Oh, I see. The evaluation was strictly tied to verifiable merit. The code works efficiently, or it doesn't. So he gets that, but then we have the university rejections. 16 highly selective engineering programs denied him admission. 16. This included five separate campuses within the University of California system, alongside MIT, Stanford, Carnegie Mellon, and Cornell. Out of 18 applications, he was only accepted to two state universities. So the corporate sector validated his abilities at the highest possible level, while the academic sector systematically excluded him. It is honestly like passing a grueling multi-day audition for a principal seat in a major symphony orchestra, playing flawlessly behind a blind screen, only to be told the next day that you are not qualified to take an introductory music theory class at the local community college. The contrast is just jarring. It really is. But I have to ask you, is this actually a failure of the system, or is it just the mathematical reality of elite schools? I mean, when an institution has thousands of applicants with perfect scores for only a few hundred spots, incredibly qualified people are going to be turned away. Well, that mathematical reality is exactly the defense these institutions rely upon. They argue that perfect metrics are merely a baseline, you know, not a guarantee of admission. They look at a pool of 5,000 applicants with 4.0 GPAs, and they just have to find a way to select 500. But the presence of the corporate offer changes the equation entirely.
What this changes is that it shifts the burden of proof onto universities to articulate exactly what they are evaluating if verifiable world-class technical mastery is somehow insufficient for admission. Wait, back up. We need to look at the other side of this. We really cannot view this purely through the lens of a flawless underdog story. Okay, that's fair. Following his hiring, he underwent a performance review under the company's revamped evaluation system. Right. This new system was designed to be significantly more rigorous, actually cutting payouts for average employees to heavily reward top performers. And in that review, he received an outstanding impact rating. Which is huge. Yeah, it places performance above the majority of high-performing engineers at one of the most competitive technology firms globally. And the family uses that rating as definitive empirical proof of his merit. They're basically saying, look, not only did he pass the test, he's actively outperforming the adults in the room. But there is fierce criticism circulating on public forums like Reddit and Hacker News regarding his background. Critics point out a highly relevant detail. His father, Nan Zong, is already a software engineering manager at Google. Yeah, the critique centers heavily on the environment he grew up in. He attended Gunn High School in Palo Alto. Right. This is an environment known for its hyper competitive atmosphere and its proximity to immense wealth. Students in that district are surrounded by the children of venture capitalists, tech executives, and Stanford professors. And critics argue that in that specific demographic, having near perfect test scores, participating in coding competitions, and even founding a nonprofit are simply standard resume fillers. Wow, standard. Yeah, we are talking about an ecosystem where parents hire private tutors for middle schoolers to learn advanced machine learning.
They suggest that despite his objective intelligence, he's essentially a dime-a-dozen applicant in the context of Silicon Valley overachievers. Like in a vacuum, building RabbitSign is incredible. But when evaluated against his immediate peers in Palo Alto, critics argue he lacked a unique standout quality that a holistic admissions board looks for. There is also the reality of public university mandates that complicates the narrative of this purely vindictive rejection. What do you mean? Well, critics highlight that public institutions in other states, such as the University of Washington, are legally mandated to prioritize their in-state residents. How does that actually function in the admissions office, though? A state university is funded by the taxpayers of that state, right? Their primary charter is to educate the students of that state. Okay, makes sense. So a place like the University of Washington might cap their out-of-state engineering admissions at something incredibly low, like maybe 10 or 15% of the total class. Oh, wow. Yeah, that severely limits the available spots for out-of-state applicants to a tiny fraction. Rejection from those specific programs is purely a matter of geographic capacity, regardless of an applicant's academic perfection. But a parent's job title does not negate a 10-hour blind technical interview conducted by five independent engineers. I mean, if the process is truly randomized and blind, the merit of the applicant stands on its own. They aren't asking his dad for the answers while he is whiteboarding a system design problem. See, I completely disagree with that framing. Really? Yeah, networking, proximity to Silicon Valley wealth, and having a parent who understands the exact internal mechanics of the corporate hiring process provide an incalculable advantage. You think the father coached him on the specific rubrics Google uses? Absolutely. The father is an engineering manager there.
He knows exactly what the interviewers are trained to look for. That's true. He knows the specific phrasing they prefer for problem solving, the optimal ways to structure code for those specific tests, and the behavioral signals they value. Claiming that this is a pure meritocracy while completely ignoring those structural advantages makes the underdog narrative completely disingenuous, in my opinion. The corporate evaluation may be blind on the day of the test, but the preparation for it was heavily resourced over years. Okay, so the consequence of this is that it limits the pure meritocracy narrative presented by the family. It introduces the reality that privilege and insider access play a role in corporate hiring just as much as subjective criteria play a role in college admissions. Exactly. And because of these complex rejections and the lack of what they felt was a satisfactory response from university officials, the father and son decided to formalize their grievance. Right. They created a nonprofit advocacy group called S.O.R.D., which stands for Students who Oppose Racial Discrimination. Yeah, after attempting to engage directly with university administration and feeling entirely dismissed, they established S.O.R.D. to serve as a co-plaintiff and basically a vehicle to organize anonymous testimony from other families experiencing the same issues. They took their grievance and built a mechanism to sue universities across multiple states. And their multi-jurisdictional strategy is highly specific. They are not just filing lawsuits indiscriminately in federal courts everywhere. No. They are specifically targeting states that have pre-existing state laws explicitly banning race-based preferences in public education. Why focus on state law instead of federal? Because state constitutions and state level voter initiatives can offer stricter protections than federal law.
In certain states, voters have passed propositions that completely outlaw any consideration of race in public university admissions. Right. By focusing on states with these established legal mandates, they are arguing that the institutions are actively circumventing the will of the voters and violating their own state constitutions through the use of qualitative proxies. A crucial part of this strategy relies on a novel concept the father developed, which he calls evergreen legal standing. We really should explain what standing actually means in a courtroom context. Yeah. In the legal system, you cannot just sue someone because you are angry or because you think they broke a rule. Obviously. You have to prove Article III standing, which requires showing that you suffered a concrete, particularized injury. In university admissions cases, the injury is the actual denial of admission. And because his son has declined to enroll in any degree-granting institution and remains employed in the corporate sector, he is legally classified as a potential student. Exactly. And that status prevents the universities from utilizing a common, very effective legal defense called mootness. Mootness. Yeah. Normally, if a student is rejected, attends a different college and then sues, the defending university can argue the case is moot because the student is already receiving a college education elsewhere. Oh, I see. The immediate harm has passed. The courts often agree and dismiss the case, which allows the universities to basically run up the clock on these lawsuits simply by waiting for the student to graduate from their backup school. It is exactly like a ghost haunting a house. As long as the student refuses to actually go to college, the legal threat can never truly be exorcised or dismissed by the court. The grievance remains perfectly preserved indefinitely. Right.
And what this opens up is a pathway for plaintiffs to maintain perpetual legal pressure on institutions without having to prove immediate ongoing enrollment harm. It completely changes the procedural playbook. Hold on. We have to talk about how they are actually fighting these legal battles because the mechanics of this lawsuit are wild. They really are. Because you would assume a multi-state federal litigation campaign would require a massive legal team. We are talking about suing multiple heavily funded state institutions simultaneously. But traditional law firms universally refuse to take the case. Yeah. Some cited overwhelming existing case loads, while others reportedly expressed deep concern over the political controversy. Taking on major universities over admissions policies is a massive lightning rod. Oh, for sure. Firms were worried about public backlash and even potential physical safety risks associated with challenging these specific institutional policies. So left without traditional representation, they had to proceed pro se, meaning they are representing themselves in court. Right. But the father, leaning heavily into his background as a software engineer, did not just start typing up documents in Microsoft Word. He utilized conversational generative artificial intelligence models, specifically ChatGPT and Gemini, to draft the initial federal complaints. And these were not short summaries or basic template letters. The AI generated highly structured legal filings that exceeded hundreds of pages in length. Hundreds. The documents included deep constitutional analysis, jurisdictional comparisons, and specific formatting required by federal courts. The cost disparity here is just staggering. Sustaining complex civil rights litigation against state entities usually requires massive financial retainers. I mean, hundreds of thousands of dollars just to get through the discovery phase. Yeah.
The father described securing what he calls a team of deep lawyers available around the clock for a nominal monthly fee of roughly $20. $20. And the capability of the software was proven during a specific procedural clash: a defending university objected to the scope of a litigation hold notice. For those who haven't been through corporate litigation, a litigation hold is basically a legal freeze ray. That's a good way to put it. Yeah. When a lawsuit is filed, you send this notice to the opposing party and it legally forces the university to stop deleting any emails, internal Slack messages, server logs, or admission files that might be relevant to the case. It entirely freezes their data retention policies. The university's legal team pushed back, likely arguing the hold was too broad or overly burdensome. So the father fed their legal objection into the AI. He prompted it to analyze the objection based on the Federal Rules of Civil Procedure. The AI drafted a highly technical, legally rigorous response, citing precedents on electronic discovery. And that AI generated response was so effective that it successfully forced the institution to back down and fully comply with the document retention requests. I mean, imagine fighting a heavily armored Goliath. Yeah, the university legal teams are backed by massive multi-billion dollar endowments, and they use top tier outside counsel like WilmerHale. The plaintiff is fighting them using a slingshot made of predictive text. The consequence of this is that it democratizes high stakes legal advocacy. It dismantles the financial barrier to entry and directly threatens the institutional advantage previously held by well funded university legal departments.
It really forces us to ask whether the artificial intelligence is actually generating sound innovative legal strategy, or if it is simply overwhelming the judicial system with highly articulate, perfectly formatted paperwork that the courts and opposing counsel are forced to spend time and mental energy processing. But relying on this technology in a federal courtroom carries severe documented risks. The broader legal community is currently struggling to manage a wave of AI malpractice. There are global examples of this technology failing disastrously in legal settings. In China, a judge caught lawyers citing completely fabricated cases. Oh no. Yeah, the AI had simply invented rulings and assigned them sequential patterned case numbers. And in New York and California, federal judges have issued formal sanctions and steep financial penalties against lawyers who submitted briefs relying on legal decisions that literally do not exist. These events are known as hallucinations, where the model generates false information presented with absolute confidence. We need to explore how an AI actually hallucinates, because it isn't a search engine looking up files in a database. Exactly. Large language models generate text by predicting the most mathematically probable next token or piece of a word based on the massive data set they were trained on. Right. If you ask it for a legal citation, it doesn't search a law library. It predicts what a legal citation should look like. It knows that citations often have a volume number, a reporter abbreviation, a page number, and a year. Sure. So it strings those tokens together perfectly, creating a citation that looks incredibly convincing, but points to absolutely nothing. The father claims to avoid these pitfalls through a very specific prompting methodology. Instead of asking the models to find obscure case citations, which triggers that predictive hallucination, he uses multiple different models to cross verify every piece of information.
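[Editor's aside: a toy sketch of that failure mode. A generator that only knows the statistical shape of a federal reporter citation will emit strings that pass any format check yet exist nowhere. Everything here, including the "database" of known cases, is invented for illustration.]

```python
import random
import re

# The syntactic shape of a federal appellate citation: volume, reporter, page, year.
CITATION_SHAPE = re.compile(r"^\d{1,3} F\.3d \d{1,4} \(\d{4}\)$")

# Stand-in for a real legal database lookup (both entries are invented).
KNOWN_CITATIONS = {"410 F.3d 123 (2005)", "892 F.3d 456 (2018)"}

def predict_citation(rng):
    """Mimic next-token prediction: assemble the most plausible-looking
    citation purely from the statistical shape of citations, with no lookup."""
    return f"{rng.randint(1, 999)} F.3d {rng.randint(1, 9999)} ({rng.randint(1990, 2023)})"

fake = predict_citation(random.Random(42))
assert CITATION_SHAPE.match(fake)   # perfectly formatted...
assert fake not in KNOWN_CITATIONS  # ...but it points at nothing
```

The two assertions are the whole story: form-checking and existence-checking are different operations, and a pure next-token predictor only ever does the first.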
That's smart. Yeah. If ChatGPT generates a legal theory, he feeds it into Gemini and asks it to find logical flaws. He focuses the AI prompts entirely on constructing the logical architecture of constitutional arguments under the 14th Amendment and Title VI. The 14th Amendment guarantees equal protection under the law. And Title VI prohibits discrimination on the basis of race, color, or national origin in programs and activities receiving federal financial assistance. Right. He is using the AI to build the framework of how the universities are allegedly violating these specific statutes rather than asking it to fetch case law. Furthermore, he is employing a fascinating tactical delay. He is deliberately withholding the service of legal papers to certain institutions for as long as procedurally allowed under the statute of limitations. His stated reason is to grant the AI models more time to receive updates and grow in their capabilities before advancing the case. He knows that the models available six months from now will be vastly superior to the ones available today. See, delaying a federal civil rights lawsuit simply so your software subscription can receive an update makes a complete mockery of the judicial system. You think so? Yes. It treats the federal courts like a beta testing environment for consumer technology. The courts are designed to resolve actual immediate disputes, not to wait around for a tech company to release a new language model. I push back on that completely. I think it is a brilliant, highly rational utilization of rapidly advancing technology. Really? Yeah. If your opponent has unlimited financial resources and an army of paralegals, and your primary weapon improves exponentially every few weeks through software updates, stalling for an upgrade is the most effective legal strategy available. Why fight today with version 3.5 when you can fight tomorrow with version 4.0? I mean, I see your point.
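[Editor's aside: the cross-verification workflow he describes can be sketched abstractly. The "models" below are stand-in functions, not real ChatGPT or Gemini API calls; the point is the structure, which is to accept a claim only when independent evaluators agree.]

```python
def cross_verify(claim, models, min_agreement=2):
    """Ask several independent models to evaluate the same claim and keep it
    only if enough of them independently agree it holds.

    `models` is a list of callables: claim -> True (supported) / False (flawed).
    """
    votes = sum(1 for model in models if model(claim))
    return votes >= min_agreement

# Stand-ins for real model calls (invented heuristics, for illustration only).
model_a = lambda claim: "14th Amendment" in claim
model_b = lambda claim: "Title VI" in claim or "equal protection" in claim.lower()

theory = "Numeric enrollment targets violate the 14th Amendment and Title VI."
weak_theory = "The rejection was unfair."

assert cross_verify(theory, [model_a, model_b])
assert not cross_verify(weak_theory, [model_a, model_b])
```

Cross-verification reduces, but does not eliminate, hallucination risk: if all models share the same training-data blind spot, they can agree on the same wrong answer.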
What this limits is the effectiveness of traditional legal defense strategies. Universities are no longer just fighting a human's intellect or a static legal team. They are fighting an evolving algorithm that gets smarter, faster, and more articulate every single week. To understand the core allegations drafted by this software, we really have to look at how universities allegedly achieve their demographic targets through unstated methods. Now, as we get into these claims about admissions policies, we want to be totally clear with you, the listener. We are neutrally reporting the claims made in the lawsuit and the provided source material regarding politically charged topics like race and admissions. Yes. We are not endorsing or taking a side on these viewpoints, but merely detailing the exact arguments presented in the litigation. The lawsuit heavily details the concept of the shadow quota. The plaintiffs highlight that several university campuses have a stated official objective to become a Hispanic-Serving Institution. And achieving this specific designation requires a university to reach a strict enrollment threshold. Specifically, 25% of their full-time equivalent undergraduate students must identify as Hispanic. Reaching this threshold unlocks significant federal funding. We are talking about millions of dollars in federal grants designed to expand educational opportunities and improve the academic attainment of Hispanic students. The legal argument presented by the plaintiffs is that setting a specific numeric demographic enrollment target of 25% is fundamentally incompatible with a strict race-neutral admissions mandate. Right. If a state law requires you to be entirely blind to demographics, but federal funding requires you to hit a 25% target, you have a massive conflict.
The plaintiffs allege this creates a massive financial incentive for admissions officers to utilize qualitative subjective proxies to select for specific identities, ensuring the university hits the necessary threshold for the funding. They support this by citing a state auditor report. This report investigated admissions practices and found that certain highly selective campuses had systematically admitted less qualified applicants over more qualified ones. Really? Yeah, the auditor noted these decisions were often driven by personal connections or institutional priorities that superseded objective academic evaluation. Furthermore, the lawsuit cites public statements made by a prominent law school dean. According to the complaint, this dean allegedly discussed methods for achieving demographic diversity using subjective criteria that cannot be explicitly documented as constitutional violations. Wow. The plaintiffs characterize these methods as deliberate workarounds designed to evade state laws while still engineering the demographics of the incoming class. But how can a university mathematically balance the demographics of an incoming class without using some form of a thumb on the scale? It is exactly like trying to bake a perfectly balanced cake while being legally prohibited from using measuring cups. Yeah. You know you need a specific ratio of flour to sugar to make the cake rise properly, but you are legally blindfolded when adding the ingredients. You are forced to guess, adjust subjectively, and rely on instinct, which leaves a trail of inconsistent results that plaintiffs can point to as evidence of manipulation. The consequence of this exposes the intense friction between a university's desire to secure federal diversity funding and their legal obligation to maintain strictly race neutral admissions. It opens them up to claims of systemic deception, where plaintiffs argue the holistic review is just a smoke screen for hidden quotas.
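[Editor's aside: the 25% threshold itself is simple arithmetic over full-time-equivalent (FTE) enrollment, which is what makes the alleged incentive so sharp at the margin. A sketch with invented enrollment numbers:]

```python
def hsi_eligible(hispanic_fte, total_fte, threshold=0.25):
    """A campus qualifies for Hispanic-Serving Institution funding when
    Hispanic students make up at least `threshold` of full-time-equivalent
    undergraduate enrollment."""
    return hispanic_fte / total_fte >= threshold

# Invented numbers: a campus with 20,000 FTE undergraduates.
print(hsi_eligible(4_800, 20_000))  # 24% -> False: just short of the threshold
print(hsi_eligible(5_000, 20_000))  # 25% -> True: unlocks the federal grants
```

A 200-student swing flips the outcome, which is exactly the cliff-edge dynamic the plaintiffs argue creates pressure on admissions decisions near the boundary.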
This entire conflict ultimately reveals a fundamental clash between two completely different philosophies of selection and merit. Right. On one side, we have the corporate merit model, which we can call the Google Standard. This model relies entirely on objective performance validation, technical testing, and direct assessment of output. Think about it like testing a car's engine output on a dynamometer. The dynamometer doesn't care what color the car is, where it was built, or who the previous owner was. It only measures raw horsepower and torque. In the corporate environment, the financial and operational cost of a bad hire is incredibly high. If an engineer ships broken code, servers crash, and millions of dollars are lost. Therefore, the organization rigorously prioritizes verifiable capability over educational pedigree or demographic background. And on the other side is the holistic academic model, the university standard. This philosophy views the selection process not as a reward for past excellence, but as a curation process aimed at creating a participatory society of minds. The history of holistic admissions actually goes back to the 1920s in the Ivy League. Originally, admission was based strictly on entrance exams. But when institutions noticed demographic shifts in who was passing those exams, they introduced qualitative measures, character assessments, personal essays, interviews to basically regain control of the social composition of their student bodies. Under the modern holistic model, quantitative excellence, you know, the 4.0 GPA and the perfect SAT is just a baseline prerequisite. It gets your application read. The final selection relies heavily on subjective assessments of character, life experience, and potential social contribution to the campus environment. It is less about testing the engine on a dynamometer and more about judging how the car fits aesthetically into a curated showroom.
College admission is not a prize given to the person with the highest test score. It's the deliberate construction of a diverse community. The fact that he was rejected by the university but hired by the corporation honestly proves the system works exactly as intended. You really think so? Well, he possesses pure technical skill, which the corporation needs to build products, but perhaps lacked the varied life experiences that a university curates for its community. I mean, treating university admission as a subjective social engineering project destroys national technical competitiveness. That's a strong statement. It's true, though. It punishes objective, verifiable excellence by moving the goalposts based on opaque criteria. If a student can perform at a mid-level corporate tier, writing code that is evaluated by blind industry experts, rejecting them for an undergraduate degree based on subjective holistic fit is a massive institutional failure. What this changes is that it forces you, the listener, to confront what a university degree actually represents. Is it a certification of absolute intellectual capability, or is it a subjective marker of social and institutional fit? This story is the ultimate collision of algorithmic validation, the democratization of legal tools, and the opaque nature of elite institutions. And if artificial intelligence is now capable of performing the job of a top tier corporate lawyer drafting hundreds of pages of constitutional analysis for $20 a month, how long before it can perform the job of the software engineer whose merit is at the center of this entire debate? If you're not subscribed yet, take a second and hit follow on whatever app you're using. It helps us keep making this. We appreciate you being here.