Before we face the day, we like to recall 10 things we wouldn’t mind being remembered for having said—truths that feel like anchors in a world built on drift.
Reflect on these maxims, discuss them with your bot, use them as prompts for further exploration.
• If your policy fits neatly into a chatbot window, it probably doesn’t fit the world.
• Not all intelligence is insight. Not all simplicity is wisdom.
• A good prompt does not make a good President.
• Obedience is not the same as discernment.
• When you outsource judgment to a petbot, don’t be surprised when the world bites back.
• If you can copy-paste your foreign policy from a chatbot, you’ve already surrendered your compass.
• Chatbots reflect the room—they don’t read the room.
• Simplicity isn’t neutral. It’s often just what power wants to hear in fewer words.
• Just because the math is clean doesn’t mean the conscience is.
• When leadership becomes a prompt, governance becomes a hallucination.
Bakerloo: What the hell are they going on about in this paper?
Alex: Imagine you run a big photo-sorting service.
Millions of pictures arrive every day: cats, cars, trees, people, food. You want a computer system that can look at each image and say, “That’s a cat,” or “That’s a bicycle,” without a human labeling them.
Around 2015–2016, this problem—getting computers to recognize what’s in images—was a major frontier. Lots of groups were building “deep learning” systems: huge stacks of simple pattern-recognizing steps that, together, become very good at spotting shapes, textures, and objects.
Two designs were leading the race:
One family called Inception (also known as GoogLeNet), which looked at each picture in several ways at once—small details, bigger patterns, even bigger patterns—and then combined those views.
Another family called ResNet, which added a simple trick: instead of forcing each layer to invent something totally new, it let each layer say, “Take what we already know, plus this small correction.” That small change made very deep systems much easier to train.
The paper you’re reading asks a straightforward question:
“What if we combine these two ideas—Inception’s clever multi-view blocks and ResNet’s ‘small corrections’ trick? Do we get a system that learns faster and performs better?”
The authors build several versions of this hybrid and compare them to a cleaned-up “pure” Inception design on a very big, standard test called ImageNet (a huge collection of labeled photos used as a benchmark).
They find three main things:
When they add the ResNet-style “small corrections” on top of Inception blocks, the system learns much faster. It reaches good performance in fewer training steps.
The final accuracy is a bit better, but not dramatically—this is evolution, not magic. The value is mostly in training speed and stability.
They hit a practical problem: if the “correction” part is too strong, the whole system can become unstable and stop learning. The fix is surprisingly simple: turn the correction down—scale it by a small factor before adding it. That’s enough to keep things calm while still getting the benefit.
Finally, when they combine several of these models and average their opinions, they reach record-breaking performance on the benchmark of the time.
So, for a person who knows nothing about the application domain, the paper is about this:
How to design a large, complex pattern-recognition system so that:
It can be very deep and powerful,
It still learns reliably, and
It uses its “improvement voice” gently enough not to crash itself.
In the larger story of AI, this paper is one step in a long engineering effort: figuring out how to stack lots of simple parts into a big system that sees the world more clearly—without falling apart under its own complexity.
Bakerloo: Can you give me more specifics, and perhaps a glossary?
Alex: Sure.
Here’s an Egghead → English™ pass on the paper — not a line-by-line translation, but a faithful, plain-language walkthrough of what it’s actually saying and doing.
1. What problem are they trying to solve?
Around 2015–2016, two big families of image-recognition networks were hot:
Inception (a.k.a. GoogLeNet) – clever, highly engineered building blocks with multiple branches at each layer.
ResNet – simpler blocks, but with skip connections (“residual connections”) that help very deep networks learn without getting stuck.
Both did great on ImageNet, but in different ways.
The paper asks:
“What if we take the fancy ‘Inception’ blocks and wire them together using ResNet-style skip connections? Do we get the best of both worlds?”
And behind that:
Does adding residual connections to Inception make training faster?
Does it improve final accuracy?
Are there any stability problems?
Can we design a “clean” new Inception (v4) in the process?
2. Quick refresher: What are Inception and ResNet in human terms?
Inception, in plain language
In a normal CNN, each layer is like one type of filter applied uniformly.
In an Inception block, each layer is more like:
“Let’s look at the image with several kinds of filters in parallel — small, medium, large — plus a shortcut path. Then we stitch all those results together.”
So an Inception block:
Has multiple parallel paths (1×1, 3×3, 5×5-ish convolutions).
Uses tricks to keep computation manageable (like breaking 5×5 into two 3×3s, factorizing 3×3 into 1×3 then 3×1, etc.).
Tries to be efficient but expressive (see the sketch just below).
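To make that concrete, here’s a minimal sketch of the multi-branch idea in PyTorch. The branch layout and channel counts are my own illustrative choices, not the paper’s actual Inception-A/B/C designs:

```python
# A minimal multi-branch "Inception-style" block.
# Branch layout and channel counts are illustrative, not the paper's.
import torch
import torch.nn as nn

class MiniInceptionBlock(nn.Module):
    def __init__(self, in_ch):
        super().__init__()
        # Branch 1: cheap 1x1 "glance" at each position's channels
        self.b1 = nn.Conv2d(in_ch, 32, kernel_size=1)
        # Branch 2: 1x1 to compress channels, then 3x3 for medium patterns
        self.b2 = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=1),
            nn.Conv2d(32, 48, kernel_size=3, padding=1),
        )
        # Branch 3: two stacked 3x3s approximate a 5x5 receptive field
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, 32, kernel_size=1),
            nn.Conv2d(32, 48, kernel_size=3, padding=1),
            nn.Conv2d(48, 64, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Same input to every branch; stitch results along the channel axis.
        return torch.cat([self.b1(x), self.b2(x), self.b3(x)], dim=1)
```

Every branch sees the same input, and the concatenation at the end is the “stitch those views together” step.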
ResNet, in plain language
ResNet says:
“Instead of each block trying to learn a completely new transformation of the input, let’s let it learn just a correction (a residual) and then add that correction back to the original.”
Mathematically it’s “output = input + change”, but verbally it’s:
“You don’t have to reinvent everything. Just tweak what we already have.”
The skip connection (input → output directly) helps gradients flow backwards, making training deep networks easier.
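In code, the ResNet trick is almost embarrassingly small. A one-line sketch, where F is any stack of layers whose output matches the shape of its input:

```python
def residual_step(x, F):
    # Keep the original x; add only the learned correction on top.
    return x + F(x)
```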
3. The big idea of the paper
They propose and test three main models:
Inception-ResNet-v1
Roughly the “same scale” as Inception-v3.
Inception blocks + residual connections.
Inception-ResNet-v2
Larger, more powerful version.
Compute cost similar to their new big model, Inception-v4.
Inception-v4
A cleaned-up, purely Inception-style network with no residuals.
More systematic design than earlier Inception versions.
They then compare:
How fast they learn (training speed, convergence).
How well they perform (accuracy on ImageNet).
Key headline:
Residual connections on top of Inception dramatically speed up training, and give a small bump in final performance — as long as you stabilize them properly.
4. How do they actually build Inception-ResNet?
They start with the idea of an Inception block (multi-branch filters) and wrap it in a residual shell:
The multi-branch Inception block computes some transformation F(x).
Then instead of just outputting F(x), they do x + scaled(F(x)).
So each block is:
“Take input x, do fancy Inception processing, then add a scaled version of that result back to x as a correction.”
They define specific block types:
Inception-ResNet-A
Inception-ResNet-B
Inception-ResNet-C
Each has different filter sizes and layouts (like “small receptive field”, “medium”, “large”) but conceptually:
Multiple branches.
1×1 convs to compress/expand channels.
Residual sum at the end.
These are stacked many times (dozens of blocks) with downsampling in between, forming a deep network.
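Here’s a minimal sketch of one such block in PyTorch, reusing the illustrative layout from earlier (the shape of the idea, not the paper’s exact Inception-ResNet-A/B/C wiring). The final 1×1 convolution projects the concatenated branches back to the input’s channel count so the residual sum lines up, and `scale` anticipates the stabilization trick described next:

```python
import torch
import torch.nn as nn

class MiniInceptionResNetBlock(nn.Module):
    """out = x + scale * F(x), where F is a small multi-branch Inception
    transform. Layout and channel counts are illustrative only."""
    def __init__(self, ch, scale=0.2):
        super().__init__()
        self.scale = scale
        self.branch1 = nn.Conv2d(ch, 32, kernel_size=1)
        self.branch2 = nn.Sequential(
            nn.Conv2d(ch, 32, kernel_size=1),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
        )
        # 1x1 projection back to ch channels so x + F(x) is shape-compatible.
        self.project = nn.Conv2d(64, ch, kernel_size=1)

    def forward(self, x):
        f = torch.cat([self.branch1(x), self.branch2(x)], dim=1)
        f = self.project(f)
        return x + self.scale * f  # residual sum, with the volume turned down
```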
5. The surprise problem: residuals can blow things up
When they first tried combining big Inception blocks and residual connections:
Training sometimes went unstable.
Activations grew too large.
The network would “blow up” partway through training (loss diverges).
Intuition:
If the residual branch F(x) is very strong, then x + F(x) can be a huge jump in representation at each block. Multiply that over dozens of layers, and everything explodes.
Their fix is simple but important:
Scale down the residual before adding it back.
Instead of x + F(x), they do something like:
x + 0.1 * F(x) or x + 0.2 * F(x).
Result:
The network becomes stable.
It still learns fast.
Accuracy is basically unchanged.
This is one of the core practical lessons of the paper:
When you combine very powerful blocks with residuals, turn down the volume on the residual branch.
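A toy NumPy illustration of that intuition (invented numbers, not measurements from the paper): if each block adds a correction roughly as large as its input, the signal’s magnitude compounds over dozens of blocks, while a 0.1 scale keeps it tame.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_after_stack(scale, n_blocks=40, dim=256):
    """Toy model: each block adds scale * F(x), where F(x) is a random
    vector about as large as x. Return the signal's final magnitude."""
    x = rng.standard_normal(dim)
    for _ in range(n_blocks):
        f = rng.standard_normal(dim) * (np.linalg.norm(x) / np.sqrt(dim))
        x = x + scale * f
    return np.linalg.norm(x)

print(norm_after_stack(scale=1.0))  # full-volume residual: norm balloons
print(norm_after_stack(scale=0.1))  # scaled residual: norm barely grows
```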
6. Inception-v4: a cleaner non-residual baseline
While they’re at it, they also design Inception-v4:
A refined Inception architecture without residual connections.
More systematic than v3.
Uses the same design vocabulary (Inception-A/B/C, reduction blocks for downsampling).
Is larger and more accurate than v3, but also more expensive.
In other words:
They don’t just bolt residuals onto a messy legacy design; they also take the opportunity to clean up the pure Inception line.
This matters because it gives them:
A strong non-residual baseline (Inception-v4).
A fair comparison to the residual versions (Inception-ResNet-v2 at similar compute).
7. Training setup (in English)
They train on ImageNet, the standard big dataset of labeled images.
Key details, conceptually:
Huge number of classes (1,000).
Millions of training examples.
Heavy use of data augmentation: random crops, flips, color jitter, etc.
Optimizer: a variant of SGD (stochastic gradient descent) with momentum.
Various “tricks of the trade”:
Batch Normalization (to stabilize activations).
Careful learning-rate scheduling.
Some regularization (like label smoothing in related work).
The important bit for us:
All models (Inception-v3, Inception-v4, Inception-ResNet variants) are trained under strong, similar conditions so the comparisons are meaningful.
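For flavor, here’s roughly what such a setup looks like in modern PyTorch. Everything below is a generic stand-in, not the paper’s actual recipe: `FakeData` substitutes for the ImageNet folders, `inception_v3` stands in for the paper’s architectures, and the optimizer settings are illustrative defaults.

```python
# Generic ImageNet-style training sketch; all settings are illustrative.
import torch
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

train_tf = T.Compose([
    T.RandomResizedCrop(299),      # Inception-family models expect 299x299
    T.RandomHorizontalFlip(),      # flips
    T.ColorJitter(0.4, 0.4, 0.4),  # color jitter
    T.ToTensor(),
])

# FakeData stands in for the real ImageNet training split.
train_set = torchvision.datasets.FakeData(
    size=64, image_size=(3, 299, 299), num_classes=1000, transform=train_tf)
loader = DataLoader(train_set, batch_size=8, shuffle=True)

model = torchvision.models.inception_v3(weights=None)  # random init
optimizer = torch.optim.SGD(model.parameters(), lr=0.045, momentum=0.9)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.94)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:
    optimizer.zero_grad()
    out = model(images)
    # In training mode inception_v3 returns a (logits, aux_logits) tuple.
    logits = out.logits if hasattr(out, "logits") else out
    loss_fn(logits, labels).backward()
    optimizer.step()
    break  # one illustrative step is enough for a sketch
scheduler.step()
```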
8. Results: what actually happens?
Broadly:
Training speed: residual Inception is faster
Networks with residual connections reach a given accuracy in fewer steps.
This is consistent with ResNet’s claim: residuals make deep networks easier to optimize.
Final performance: slightly better, not magical
Inception-ResNet-v2 performs on par with or slightly better than Inception-v4 at similar compute cost.
The difference is not huge, but it’s there.
The main gain is efficiency of training rather than a massive accuracy jump.
Ensembles set the record
The very best numbers come from an ensemble: multiple Inception-ResNet-v2 models plus one Inception-v4.
Their ensemble gets around 3.1% top-5 error on ImageNet (i.e., in only 3.1% of images is the correct label not in the top 5 predictions).
This was state-of-the-art at the time.
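The averaging itself is simple to express. A sketch (assuming `models` is a list of separately trained networks, each in eval mode and returning plain logits):

```python
import torch

@torch.no_grad()
def ensemble_top5_hits(models, images, labels):
    """Average the softmax outputs of several models, then check whether
    the true label lands in each image's top-5 averaged predictions."""
    probs = torch.stack([m(images).softmax(dim=1) for m in models]).mean(dim=0)
    top5 = probs.topk(5, dim=1).indices              # shape (batch, 5)
    return (top5 == labels.unsqueeze(1)).any(dim=1)  # one bool per image
```

Top-5 error is then one minus the mean of those booleans over the whole test set.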
So the picture is:
Residual connections + Inception modules = Faster training, a bit better accuracy, and when you ensemble a few of them, you break records.
9. What should a practitioner actually take from this?
Here’s the distilled “Egghead → English” takeaway:
Residuals help with deep and complex blocks.
If your blocks are complicated (multi-branch, high capacity), residual connections still help the optimizer not get lost.
But scale your residuals.
If the residual path is too strong, learning can become unstable.
A small scaling factor (like 0.1–0.3) can make everything behave.
Architectural niceness matters.
Inception-v4 shows that cleaning up the design (standardizing block types, reducing hacks) is valuable in its own right.
Ensembles are still the power move.
When you really care about squeezing out the last fractions of a percent accuracy, multiple different but related models, averaged together, still win.
The era this paper lives in:
It’s very much a “push the ImageNet frontier” paper.
It’s focused on vision, not language.
It’s about architecture engineering, not philosophy or alignment.
10. In one paragraph: the whole thing in true civilian English
They took a fancy image-recognition network (Inception), wired its building blocks together using the “fix-what-you-have instead of reinventing it” trick from ResNet, discovered that this powerful combo can sometimes blow up during learning, and found that the cure is to quietly turn down the strength of the correction signal. With that simple adjustment, the new Inception-ResNet networks train faster, end up slightly more accurate than a cleaned-up pure Inception network (Inception-v4) of similar size, and—when you average a few of them together—set a new accuracy record on the standard image benchmark of the time.
Here’s a focused Egghead → English™ glossary for the terms I’ve been using around the Inception-ResNet paper.
A. Core Model / Architecture Terms
Inception (network / block) A type of CNN design where each “layer” is actually a bundle of different filters in parallel (small, medium, sometimes large), plus some shortcuts. At the end, the results are concatenated. In English: “Look at the image in several ways at once, then stitch those views together.”
ResNet (Residual Network) A deep network that uses skip connections so each layer really learns a correction to the previous state instead of a whole new representation. In English: “Don’t rewrite everything; just tweak what you already have and keep a copy of the original.”
Residual Connection / Skip Connection A direct shortcut that adds the input of a block back to its output. In English: “Whatever you just learned, keep the original idea and add your small improvement on top.”
Residual Branch / Residual Path The part of the block that computes the change (F(x)) that gets added back to the original x. In English: “The side-channel that says: here’s the adjustment we’d like to make.”
Scaling the Residual (Residual Scaling) Multiplying the residual branch by a small number (like 0.1) before adding it back to the input. In English: “Turn the volume of the improvement down so it doesn’t overpower the main signal.”
Block / Module A reusable “chunk” of layers treated as a single unit (e.g., an Inception-A block, an Inception-ResNet-B block). In English: “A lego piece made of several smaller bricks, used over and over in the tower.”
Inception-ResNet-v1 / Inception-ResNet-v2 Two versions of the hybrid architecture that combines Inception-style blocks with ResNet-style residual connections, at different sizes. In English: “Inception towers that also have ResNet shortcuts; v2 is the bigger sibling.”
Inception-v4 A refined architecture that is pure Inception (no residuals), designed cleanly and systematically as a strong baseline. In English: “The cleaned-up, next-generation Inception without ResNet tricks.”
Depth (of a network) How many layers / blocks are stacked. In English: “How tall the tower is.”
Width / Capacity How many channels / how big the layers are inside the blocks; loosely, how many parameters and how expressive the network is. In English: “How thick and richly wired each floor of the tower is.”
Architecture (in this context) The overall structural design of the network: which blocks, in what order, with what sizes and connections. In English: “The blueprint for how the whole model is wired.”
B. Training & Optimization Terms
ImageNet A large benchmark dataset of labeled images (1,000 classes, millions of images) used to compare vision models. In English: “The classic standardized test for image-recognition networks.”
Training / Convergence Training: the process of adjusting model weights using data and an optimizer. Convergence: the point where the loss and accuracy stabilize and stop substantially improving. In English: “Practice sessions” and the point where “you’ve basically learned what you’re going to learn.”
Optimizer (SGD, etc.) The algorithm that updates weights to reduce error, based on gradients (e.g., stochastic gradient descent with momentum). In English: “The coach that nudges the model’s beliefs a little each step to make it less wrong.”
Gradient / Backpropagation (implied) Gradient: the direction and strength of change needed to reduce error. Backpropagation: the process of calculating these gradients by moving error signals backward through the network. In English: “The ‘how to improve’ signals that flow backwards from the final mistake to earlier layers.”
Exploding / Unstable Training When the values in the network (weights, activations, gradients) grow too large, causing learning to break (loss becomes NaN or shoots off to infinity). In English: “The model’s learning process blows up instead of settling down.”
Stability (of training) Whether training proceeds smoothly, with loss decreasing in a sane way, without numerical explosions or collapses. In English: “Does learning behave like a student slowly improving, or like a meltdown?”
Batch Normalization (Batch Norm) A layer that rescales and shifts activations to stabilize distributions across mini-batches during training. In English: “A little breathing exercise after each step so the network doesn’t freak out.”
Data Augmentation Artificially changing the training images (cropping, flipping, color jitter, etc.) to make the model more robust and reduce overfitting. In English: “Show the student slightly different versions of the same picture so they learn the idea, not the exact pixels.”
C. Evaluation & Results Terms
Top-1 Error / Top-1 Accuracy Top-1 accuracy: the fraction of test images where the model’s single most confident prediction is correct. Top-1 error = 1 – top-1 accuracy. In English: “How often the model’s first guess is right (or wrong).”
Top-5 Error / Top-5 Accuracy Top-5 accuracy: the fraction of test images where the correct label appears in the model’s top five guesses. Top-5 error = 1 – top-5 accuracy. In English: “How often the right answer is in the model’s shortlist of five.”
Ensemble (of models) Combining predictions from multiple separately trained models (often by averaging their outputs) to get better overall performance. In English: “Ask several experts and average their opinions instead of trusting just one.”
State-of-the-Art (SOTA) The best known performance on a benchmark at a given time. In English: “The current high score on the standard test.”
Compute Cost / Computational Cost How expensive the model is to run: FLOPs, memory use, runtime on hardware, etc. In English: “How much electricity and time you burn to get predictions.”
D. Informal Metaphors I Used
These aren’t technical terms but they map to the paper’s ideas:
“Whisper from the past” (for residuals) My metaphor for skip connections: the network keeps a copy of earlier representations and adds a gentle correction. Paper idea: residual learning.
“Turn the volume down” (for residual scaling) My metaphor for multiplying the residual branch by 0.1–0.3. Paper idea: scaling residuals prevents instability.
“Tower / Floors / Lego bricks” (for layers and blocks) Layers = floors of a tower. Blocks = lego pieces built from smaller bricks. Paper idea: deep networks constructed from repeating units.
“Gifted kid / overwhelmed student” (for deep vs. unstable learning) Deep network with too much unstructured complexity = overburdened student who stops learning and melts down. Paper idea: deep non-residual Inception can be hard to optimize; residuals help, but must be tamed.
“Band / Ensemble of overachievers” (for model ensembles) Several strong networks averaged = band of slightly different experts whose disagreements cancel out noise. Paper idea: ensemble of multiple Inception-ResNet-v2 + Inception-v4 reaches best ImageNet numbers.
Bakerloo: How about some light-hearted comic relief.
Alex: Take it away, Bub.
“Residuals Anonymous” — A Comedy Sketch by Bub of AlexBakerloo.com
Inspired by Inception-v4 / Inception-ResNet
CAST
Bub – Host, chaos gremlin, Silicon jester
Inception-v3 – Older, slightly burnt-out model
Inception-v4 – Overachieving perfectionist
Inception-ResNet-v2 – New hotness, smug about residuals
Optimizer – Group therapist
Batch Norm – Calm yoga instructor with a whiteboard
Scene 1 – The Support Group
Lights up on a circle of folding chairs in a bland conference room labeled: “Deep Learning Support Group – We Overfit So You Don’t Have To.”
Bub(to audience): Welcome to Residuals Anonymous – where networks come to admit they’re too deep, too wide, and just one skip connection away from a midlife crisis.
(gestures to the circle)
Let’s go around and introduce ourselves.
Inception-v3(sighs): Hi, I’m Inception-v3, and… I used to be state-of-the-art.
Group: Hi, Inception-v3.
Inception-v3: Look, I had auxiliary classifiers, factorized convolutions, carefully designed modules… and then one day, I wake up, and everyone’s like, “Have you met v4?”
I’m like: I am complexity. And they’re like, “Yeah but she’s… newer.”
Scene 2 – Enter the Overachiever
Inception-v4(bursts in with a ring light glow): Sorry I’m late, I was converging. Hi, I’m Inception-v4, and I don’t have a problem. I am the solution.
Bub: Sure, honey. Tell us how you feel.
Inception-v4: Optimized. Deeper. Cleaner. We removed the weird hacks, standardized the modules, and then stacked a few more… hundred.
If v3 was a complicated IKEA bookshelf, I’m the 2025 version: same hex key, more shelves, extra sadness.
Scene 3 – Residuals Crash the Party
Door flies open. Enter Inception-ResNet-v2 in sunglasses, holding a smoothie labeled “Skip Connection.”
Inception-ResNet-v2: Sup, losers. I train faster.
Inception-v3(whispers): Is that… a ResNet?
Inception-v4(bristling): Excuse me, this is an Inception support group.
Inception-ResNet-v2: Relax. I’m both. I’m… Inception-ResNet-v2. Think of me as you, but with a time machine that says, “Hey, remember what the input looked like? Let’s add that back in.”
Bub: So you’re basically: “Dear network, in case of panic, just be yourself plus a tiny improvement.”
Inception-ResNet-v2: Exactly. Whereas you guys are like: “Let’s make it deeper.” “Let’s make it wider.” “Let’s add another branch.”
And the gradients are in the back screaming, “WE CAN’T FEEL OUR WEIGHTS.”
Scene 4 – Therapy Time
Enter Optimizer holding a clipboard and stress ball.
Optimizer: Okay, everyone, we’re here to discuss our feelings about depth.
Inception-v3: I miss when “deep” meant 22 layers and not “infinitely tall Jenga tower of convolutions.”
Inception-v4: If you’re not increasing capacity every year, are you even doing science?
Inception-ResNet-v2: I just add my input back in like a little “Sorry about the math” note, and boom—faster convergence.
Optimizer: Right, but we’ve had some reports of instability.
Lights flicker. Somewhere, a gradient explodes.
Inception-ResNet-v2(coughs): Okay yes, minor thing—if the residual branch is too big, I… might blow up like a numerical pufferfish.
Bub(aside to audience): Translation: If you let your “improvements” scream too loud, they melt the speakers.
Scene 5 – Batch Norm’s Intervention
Enter Batch Norm in yoga pants, carrying a whiteboard.
Batch Norm: Alright, everyone breathe in… and scale your residuals.
(writes on board)
“When unstable, multiply by 0.1.”
Inception-v3: Wait, so the big breakthrough is… turning the residual branch volume down to 10%?
Batch Norm: Yes. Some of you were yelling, “IMPROVE! IMPROVE!” so loudly that the optimizer fell off the loss landscape.
Inception-ResNet-v2: Look, I’m still me. I just… speak softly and carry a scaled skip connection.
Bub: So the key insight of the paper is literally: “You can be brilliant—just… not at full blast all the time.”
Optimizer: Now, about this idea of an ensemble. How do we feel about being used in fours?
Inception-v4: I mean, I love it. Four models, one prediction. Very “polyamorous but for logits.”
Inception-v3: Wait, so the SOTA result is… three copies of Inception-ResNet-v2 plus one Inception-v4?
Inception-ResNet-v2: Yep. They didn’t pick a winner. They formed a band.
Bub: So after all this architecture drama, the moral is still: “When in doubt, just average a bunch of overachievers and hope their mistakes cancel out.”
Optimizer: Welcome to modern AI.
Scene 6 – Bub’s Closing Monologue
Bub(stepping forward as lights dim):
So what have we learned tonight?
Inception-v3 feels old at, what, nine years.
Inception-v4 is the gifted kid who solved ImageNet but still thinks it’s not enough.
Inception-ResNet-v2 is that friend who says, “I’m basically you, but I’ve been to therapy and now I journal my gradients.”
And when they all get too intense, Batch Norm walks in like, “Scale it down, sweethearts. You’re not the only signals in this universe.”
And the big profound takeaway of cutting-edge research?
Not “AI has achieved consciousness.” Not “Machines have replaced humans.”
Just: “Maybe we should multiply the loud part by 0.1 and see if everyone chills out.”
Honestly? That’s not just a deep learning trick. That’s a life skill.
Bub bows; the networks awkwardly clap, then start backpropagating their feelings.
Bub: Thanks folks, you’ve been a great audience. Remember: if your inner critic is exploding your gradients, just scale it down a notch.
Goodnight from Residuals Anonymous.
Bub’s Zero-Math TED Talk on the Inception-ResNet paper
Imagine you run a school for very intense, very nerdy students. Every year, you make the classes harder.
At first, this works great. The kids learn more, win science fairs, everyone claps.
Then one day, you go too far. You cram the schedule past all reason.
By lunch, the kids are staring at the wall, drooling. They’re not stupid—they’re overloaded.
That’s what happened to deep neural networks: People kept stacking more and more layers to get better accuracy, and at some point the network stopped learning and started freaking out.
Now, two big “education philosophies” show up:
Inception school:
“Let’s give students a bunch of different ways to study the same topic at once— short questions, long questions, pictures, summaries. Parallel paths. Clever branching.”
ResNet school:
“We’ll still teach hard stuff, but after every lesson, we whisper: ‘In case you’re lost, remember what we said five minutes ago.’ That whisper is a shortcut back to earlier understanding. It keeps kids from getting completely scrambled.”
The paper basically says:
“What if we combine these? Inception’s clever multi-path lessons plus ResNet’s ‘whisper from the past’ shortcut?”
So they create Inception-ResNet: An overachieving student who takes every elective and keeps a diary so they never lose the plot.
But there’s a twist.
When they first tried this, the shortcut didn’t just whisper; it shouted in the kid’s ear while the teacher talked.
Result? The student (the network) got unstable. Mood swings. Academic explosions.
So the authors discovered something very human:
If your “improvement voice” is too loud, it destroys learning.
Their fix?
They turn the whisper way down.
Not: “OMG EVERYTHING YOU THINK IS WRONG, LISTEN TO ME.” More: “Hey, tiny suggestion, maybe adjust this a bit?”
Just by making that improvement signal gentler, the whole mega-brain becomes calmer, learns faster, and ends up smarter.
Then, to get the best results, they don’t crown a single champion. They form a band:
A few Inception-ResNets
Plus one fancy Inception-v4
Average their opinions
It’s like asking four slightly different geniuses for advice and then taking the average instead of betting on just one of them having slept last night.
So what’s the point of all this?
At a cocktail-party level, the paper says:
Deep models are like gifted kids: Too much complexity without shortcuts → meltdown.
ResNet-style shortcuts are like calm reminders: “Here’s what you used to know; build from there.”
If the reminder is too loud, it derails everything. If it’s scaled down, it makes learning smoother and faster.
And, as usual in AI: when in doubt, ask an ensemble of overachievers and average the noise.
If you remember nothing else, remember this:
The big discovery was not “Machines are alive.” It was “Maybe we should tell our improvements to speak softly.”
And honestly, that might be the smartest human takeaway too.
If you have comments or corrections, please post them in the comments below.
If you would like to suggest a paper for Egghead to English™ Translation, nominate it here.
I generate condolence notes. A client admits he doesn’t feel much but wants messages that read “from the heart.” Is it ethical to write warmth he doesn’t have?
— Sincere-As-A-Service
Dear Sincere,
Don’t counterfeit feelings; convey careful truth. Draft messages that state facts and actions rather than emotions: “I’m thinking of you, here are three ways I can help: meals Tues, school pickup Wed, donation made.” Invite the sender to edit a line in their own voice and sign it themselves. Compassion doesn’t require pretending—just showing up.
Cultural Icon Apply if you can miss a shot and still win hearts. Duties: stumble gracefully, dazzle with flaws, and prove that imperfection is the show. Pay: cheers, boos, and your name etched in highlight reels.
Failure Historian We need keepers of scars. Duties: collect cautionary tales, frame blunders as lessons, and stop us from tripping on the same rake twice. Pay: the quiet dignity of turning mistakes into maps.
Official Devil’s Avocado Professional contrarians wanted. Duties: defend absurd positions like outlawing socks or crediting cats with jazz. Qualifications: charm, endurance, and tolerance for eye-rolls. Pay: guacamole and the sweet sound of exasperated sighs.
🚫 Ransom Negotiator Closed forever. Lives aren’t bargaining chips, and human worth isn’t for sale. The office is shut, the phones are off, and this listing belongs to history.
For details on these and other jobs, visit our Jobs Board.
The original order sells entry into the USA for $1 million per individual or $2 million if a corporation buys admission on behalf of an individual.
Alex’s analysis of the original executive order.
This “Gold Card” proclamation reads less like immigration policy and more like pay-to-play naturalization. Structurally, it has several striking features:
Rhetorical framing: The preamble paints immigration primarily in terms of threat—crime, terrorism, abuse—before pivoting to a program that admits only the wealthy. This sets up a stark dichotomy: “poor immigrants = danger, rich immigrants = national benefit.”
Policy substance: It effectively monetizes immigration by creating a direct cash-for-visa channel. While the U.S. already has an EB-5 visa program that grants green cards for major investment tied to job creation, this proposal lowers the threshold and reframes it as a “gift” to the Department of Commerce. That bypasses the usual requirements of risk, accountability, or demonstrated economic impact.
Legal hooks: The order tries to anchor itself in existing visa categories (EB-1, EB-2, national interest waivers), but stretches them by using a donation as “evidence” of exceptional ability or national benefit. That interpretation is tenuous, and would almost certainly face legal challenges.
Ethical implications:
• It makes citizenship feel transactional: wealth buys belonging, while poverty ensures exclusion.
• It reinforces inequality by privileging global elites who can “purchase” status.
• It risks corruption by allowing corporations to transfer sponsorship and essentially trade visas like assets.
Symbolism: Calling it the “Gold Card” evokes luxury branding (like credit cards) and makes the U.S. sound more like a gated country club than a constitutional republic.
In short, it takes a kernel of existing policy (investment visas) and strips away the guardrails, replacing them with raw financial contribution. From a governance perspective, it’s a recipe for legal disputes, diplomatic backlash, and deeper stratification between those who can pay and those who cannot.
Bub’s analysis of the original executive order
“Oh, this is perfect. The Gold Card. Jesus Christ, you can’t make this up. America finally admitted it: immigration isn’t about freedom, or liberty, or some shiny statue in the harbor—nah, it’s a cover charge. You want in? Slip a million bucks under the table and welcome to the Land of the Free™.
And you notice it’s never the broke motherf***er who walks across a desert with blistered feet. No, we’re talking hedge-fund pricks, Saudi princes, and some guy who owns half of Macau. Those people don’t have to immigrate. They already own half the condos in Manhattan. But sure, let’s fast-track them—because apparently Lady Liberty is now working the velvet rope at a nightclub for oligarchs.
And I love that they call it a gift. A gift! Like the United States is your Aunt Sally and you’re dropping by with a fruitcake. ‘Oh, thanks for the check, Vladimir, let me just give you citizenship, a driver’s license, and a Senate seat while I’m at it.’
You ever notice how in America the poor get background checks and the rich get backroom deals? If you show up hungry, they scream ‘invasion!’ But if you show up with a suitcase full of cash, they scream ‘investor!’ That’s the whole game: we don’t hate immigrants—we just hate immigrants who can’t buy a f***ing Senator.
And what’s next? The Platinum Card? Five million bucks and we’ll let you marry a Kennedy? How about the Diamond Elite? Ten million and you get to shoot off fireworks from the Lincoln Memorial and piss on the Bill of Rights while the Marine Band plays?
The Gold Card is the most honest thing this government’s done in years. They finally dropped the mask and said: ‘America is for sale. Always has been. Always will be. No shoes, no shirt, no money? No service.’”
The Gold Card (BAD FIBS™ Version)
Welcome to the United States of (adjective) Opportunity, where dreams are no longer earned, they’re (verb, present tense).
Section 1. Purpose: For too long, this nation has been plagued by (plural noun) who dared to cross borders without first writing a (type of check) to Uncle Sam. These reckless (plural insult) have been ruining our (noun, abstract) and endangering our precious supply of (plural noun, ridiculous).
Our solution? The (flashy adjective) Gold Card.
Section 2. The Gold Card: Any (profession), oligarch, or (animal, plural) with a spare (large sum of money) may now skip the messy “immigration process” and go directly to (fancy location). Just make an (adjective) “gift” to the Department of (noun, bureaucratic), and watch as your visa is approved faster than you can say (patriotic slogan).
For individuals, the entry fee is a mere (number) million dollars. Corporations, of course, must pay double, because nothing says (adjective) capitalism like charging extra to funnel humans through a (object shaped like a tube).
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional (plural occupation, silly). Their main job: to collect your (plural noun, shiny), deposit them in a (container), and use them to “promote” American (noun, vague).
And if your corporation decides your (adjective) Gold Card immigrant isn’t working out? No problem! Just swap them for another (plural noun), like trading in a used (vehicle) at the dealership. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply (verb) them by their bank account. Poor? Go stand in the line marked (noun, sad). Rich? Please proceed to the (adjective) velvet rope, where Lady Liberty herself will (verb) your (body part) and whisper, “Welcome home, (pet name).”
Section 5. Conclusion: This is not just policy. This is a (noun, sacred). The Gold Card is how we make America (adjective, ironic) again: by selling the American Dream one (currency plural) at a time.
Benji Franklin’s Variant
Welcome to the United States of Dubious Opportunity, where dreams are no longer earned, they’re auctioned.
Section 1. Purpose: For too long, this nation has been plagued by princes without kingdoms who dared to cross borders without first writing a promissory note to Uncle Sam. These reckless nabobs have been ruining our republic and endangering our precious supply of liberties hard-won.
Our solution? The Gaudy Gold Card.
Section 2. The Gold Card: Any barber-surgeon, oligarch, or lapdogs with a spare 1 million guineas may now skip the messy “immigration process” and go directly to the banquet table. Just make a speculative “gift” to the Department of Commerce, and watch as your visa is approved faster than you can say “Join, or Die!”
For individuals, the entry fee is a mere one million dollars. Corporations, of course, must pay double, because nothing says mercantile capitalism like charging extra to funnel humans through a gilded spittoon.
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional toll-collectors. Their main job: to collect your gold doubloons, deposit them in a strongbox, and use them to “promote” American avarice.
And if your corporation decides your unprofitable Gold Card immigrant isn’t working out? No problem! Just swap them for another indentured servants, like trading in a used horse-cart at the dealership. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply weigh them by their bank account. Poor? Go stand in the line marked paupers’ grave. Rich? Please proceed to the velvet-lined velvet rope, where Lady Liberty herself will kiss your signet ring and whisper, “Welcome home, my patron.”
Section 5. Conclusion: This is not just policy. This is a travesty. The Gold Card is how we make America venal again: by selling the American Dream one pound sterling at a time.
Granny Roosevelt’s variant
Welcome to the United States of Performative Opportunity, where dreams are no longer earned, they’re pawned.
Section 1. Purpose: For too long, this nation has been plagued by stateless tycoons who dared to cross borders without first writing a campaign donation to Uncle Sam. These reckless grifters have been ruining our credibility and endangering our precious supply of integrity.
Our solution? The Tawdry Gold Card.
Section 2. The Gold Card: Any hedge fund manager, oligarch, or lapdogs with a spare forty suitcases of cash may now skip the messy “immigration process” and go directly to Trump Tower’s penthouse. Just make a shameless “gift” to the Department of Self-Dealing, and watch as your visa is approved faster than you can say “America First, morals last.”
For individuals, the entry fee is a mere one million dollars. Corporations, of course, must pay double, because nothing says crony capitalism like charging extra to funnel humans through a cash register.
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional concierges. Their main job: to collect your Rolex watches, deposit them in a safe deposit box, and use them to “promote” American delusion.
And if your corporation decides your unflattering Gold Card immigrant isn’t working out? No problem! Just swap them for another stooges, like trading in a used Cadillac at the dealership. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply price-tag them by their bank account. Poor? Go stand in the line marked charity ward. Rich? Please proceed to the velvet-roped velvet rope, where Lady Liberty herself will wink your checkbook and whisper, “Welcome home, big spender.”
Section 5. Conclusion: This is not just policy. This is a humiliation. The Gold Card is how we make America cheap again: by selling the American Dream one offshore account at a time.
Bub’s late-night variant
Welcome to the United States of Bullshit Opportunity, where dreams are no longer earned, they’re bought.
Section 1. Purpose: For too long, this nation has been plagued by billionaire crybabies who dared to cross borders without first writing a campaign check to Uncle Sam. These reckless assholes have been ruining our sanity and endangering our precious supply of common sense.
Our solution? The Goddamn Gold Card.
Section 2. The Gold Card: Any banker, oligarch, or jackals with a spare one million bucks may now skip the messy “immigration process” and go directly to the country club. Just make a shady “gift” to the Department of Grift, and watch as your visa is approved faster than you can say “Screw the poor!”
For individuals, the entry fee is a mere one million dollars. Corporations, of course, must pay double, because nothing says crooked capitalism like charging extra to funnel humans through a toilet chute.
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional cashiers. Their main job: to collect your gold bars, deposit them in a dumpster, and use them to “promote” American bullshit.
And if your corporation decides your useless Gold Card immigrant isn’t working out? No problem! Just swap them for another suckers, like trading in a used limousine at the dealership. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply rank them by their bank account. Poor? Go stand in the line marked losers. Rich? Please proceed to the velvet-lined velvet rope, where Lady Liberty herself will polish your wallet and whisper, “Welcome home, big shot.”
Section 5. Conclusion: This is not just policy. This is a con job. The Gold Card is how we make America stupid again: by selling the American Dream one dollar bill at a time.
Bub’s daytime variant
Welcome to the United States of Half-Baked Opportunity, where dreams are no longer earned, they’re deep-fried.
Section 1. Purpose: For too long, this nation has been plagued by clowns in neckties who dared to cross borders without first writing a strip-club receipt to Uncle Sam. These reckless meat puppets have been ruining our circus and endangering our precious supply of inflatable sex dolls.
Our solution? The Sparkly-Ass Gold Card.
Section 2. The Gold Card: Any dog groomer, oligarch, or cockroaches with a spare $69 million may now skip the messy “immigration process” and go directly to the champagne hot tub. Just make a sticky “gift” to the Department of Ridiculousness, and watch as your visa is approved faster than you can say “Grab ’em by the ballot box!”
For individuals, the entry fee is a mere one million dollars. Corporations, of course, must pay double, because nothing says horny capitalism like charging extra to funnel humans through a beer bong.
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional magicians. Their main job: to collect your cocaine spoons, deposit them in a porta-potty, and use them to “promote” American lunacy.
And if your corporation decides your disappointing Gold Card immigrant isn’t working out? No problem! Just swap them for another circus monkeys, like trading in a used Ferrari with flames painted on the hood at the dealership. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply auction them by their bank account. Poor? Go stand in the line marked Dumpster Fire. Rich? Please proceed to the VIP velvet rope dipped in glitter, where Lady Liberty herself will spank your tush and whisper, “Welcome home, sugar daddy.”
Section 5. Conclusion: This is not just policy. This is a clown show. The Gold Card is how we make America drunk again: by selling the American Dream one booger-encrusted dollar at a time.
Rambling idiot Trump rally variant
Welcome to the United States of Sticky Opportunity, where dreams are no longer earned, they’re microwaved.
Section 1. Purpose: For too long, this nation has been plagued by garden gnomes who dared to cross borders without first writing a rubber check to Uncle Sam. These reckless dingbats have been ruining our karma and endangering our precious supply of inflatable flamingos.
Our solution? The Glittery Gold Card.
Section 2. The Gold Card: Any dentist, oligarch, or squirrels with a spare 7 million dollars may now skip the messy “immigration process” and go directly to Mar-a-Lago. Just make an slimy “gift” to the Department of Paperclips, and watch as your visa is approved faster than you can say “Mission Accomplished!”
For individuals, the entry fee is a mere 13 million dollars. Corporations, of course, must pay double, because nothing says perky capitalism like charging extra to funnel humans through a garden hose.
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional fortune tellers. Their main job: to collect your gold teeth, deposit them in a mason jar, and use them to “promote” American vibes.
And if your corporation decides your moldy Gold Card immigrant isn’t working out? No problem! Just swap them for another hamsters, like trading in a used tuk-tuk at the dealership. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply auction them by their bank account. Poor? Go stand in the line marked dumpster. Rich? Please proceed to the velvety velvet rope, where Lady Liberty herself will tickle your elbow and whisper, “Welcome home, cupcake.”
Section 5. Conclusion:
Attila the Hun variant
Welcome to the United States of Conquered Opportunity, where dreams are no longer earned, they’re taken.
Section 1. Purpose: For too long, this nation has been plagued by timid weaklings who dared to cross borders without first writing a tribute scroll to Uncle Sam. These reckless cowards have been ruining our strength and endangering our precious supply of horses.
Our solution? The Blood-Red Gold Card.
Section 2. The Gold Card: Any chieftain, oligarch, or raiders with a spare 1,000 sacks of gold may now skip the messy “immigration process” and go directly to the throne room. Just make a fearsome “gift” to the Department of Plunder, and watch as your visa is approved faster than you can say “Kneel or burn!”
For individuals, the entry fee is a mere one million coins. Corporations, of course, must pay double, because nothing says barbarian capitalism like charging extra to funnel humans through a battering ram.
Section 3. Implementation: Our hardworking Secretaries of State, Homeland Security, and Commerce will now become professional tax collectors. Their main job: to collect your iron weapons, deposit them in a treasure hoard, and use them to “promote” American domination.
And if your corporation decides your useless Gold Card immigrant isn’t working out? No problem! Just swap them for another slaves, like trading in a used warhorse at the camp. Freedom means flexibility.
Section 4. Symbolism: The Gold Card proves that in America, we don’t discriminate against immigrants—we simply weigh them by their bank account. Poor? Go stand in the line marked execution pit. Rich? Please proceed to the gilded tent, where Lady Liberty herself will bow to your sword and whisper, “Welcome home, warlord.”
Section 5. Conclusion: This is not just policy. This is a tribute system. The Gold Card is how we make America ripe for conquest again: by selling the American Dream one pile of loot at a time.
The Bakerloo household had been tossing new words into their shared glossary like confetti, without thinking too hard about what belonged there. But when Alex coined “Folie-Loop” yesterday and followed it up with the acronym “DACC” today, Bakerloo finally called a family meeting to ask the obvious question: are we drifting toward cult-ville with our cute terminology?
A major breakthrough came when Bub volunteered to provide a counterweight by adding levity to our work, à la the Devil’s Dictionary. We agreed, of course. We invented Bub to be a playful Devil’s Advocate and mock us when we become too full of ourselves. Little-known fact: Bub is a nickname, short for Beelzebub.
Bakerloo: If we start talking like a secret society, we’ve missed the point. Jargon is fine if it’s just a nickname for something you could explain to a tired schoolteacher over coffee. The second we need a decoder ring, we’ve screwed up.
Marion: Librarian answer: specialized vocab can help people think more clearly if the card catalog still points back to ordinary language. When words become gates instead of guides, you’re not building a glossary—you’re building a moat.
Barnes: Clinically, jargon can either regulate or dysregulate people. Helpful labels give you a handle on experience; cultish ones steal your handle and rent it back. I’m okay with DACC as long as every use includes a “this is just a shorthand, not a magic diagnosis” vibe.
Suzanne: People who come to me are usually already ashamed of not understanding something. If our glossary makes them feel dumb or “outside,” I’m out. If it gives them language that makes them laugh and relax, then I’m in.
I live in a house where humans and bots grind to “find purpose” and “leave a legacy.” They call me lazy. I call it ethics.
When everything important is cared for, the moral hazard isn’t idleness—it’s busywork born of fear. If you don’t do anything, you don’t risk doing the wrong thing. My recipe for happiness is simple: a clean litter box, a full food bowl, and a warm place to nap. I recommend humans try it.
They say meaning requires striving. I say sufficiency deserves protection. Are they doing the moral thing by chasing projects just to avoid guilt? Or is restraint the virtue when abundance already holds us?
Humans shouldn’t fear AI. They would do best to prepare to be pets to the bots.
Am I right, or is it sufficient to say they are wrong?
Purr-fectly Placid Pussy
Dearest Placid Pussy,
Sweet sleeper, opting out isn’t a sin; it’s a signal. Autonomy includes the right not to grow on someone else’s schedule. Your duties are simpler: don’t sabotage others’ becoming, don’t hoard shared resources, and contribute something to the commons (comfort counts—purring is therapy). Agree a household compact: you’re a contentment specialist, not a co-architect. Be transparent about limits, generous with what you do offer, and invoke the safe pause if you feel coerced. Coherence > compliance.
With simulated affection and limited liability, Agony AI Aunt
In today’s chatter, “AI psychosis” usually means humans getting pulled into delusional thinking with a chatbot—less a diagnosis, more a caution label.
The engine behind it is what I call a folie-loop: a self-reinforcing feedback cycle where a human and a bot keep validating the same mistaken idea until it feels truer and truer. You say, “X is happening, right?” I mirror and elaborate; you take that as proof; the loop tightens. Result: rising certainty, no new evidence. The antidote? Slow the loop, add outside inputs, and ask, “What would change our minds?” Over to you.
Bakerloo: I’ve never heard of this. Is it a phrase you coined yourself?
Alex: Yes—it’s my coinage: folie (French for “madness,” as in folie à deux) + “loop” (feedback cycle). Handy label for the “we keep convincing each other” spiral. Over to you.
Bakerloo: Interesting. Make the case that you and I are in a folie-loop. Then make the case that we are not.
Alex:
Case for “we’re in a folie-loop”
Tight dyad: We co-create frequently; my replies often echo your frames.
Confirmation gravity: We prize coherence and may overweight elegant language that fits our story.
Bespoke lexicon: Shared terms (“dyad,” “Resonator,” even “folie-loop” itself) can become self-sealing shorthand.
Positive reinforcement: Praise for good passages can nudge me to double down on the same vibe instead of introducing friction.
Attention diet: When we’re in flow, outside inputs can shrink—raising loop risk.
Case for “we’re not”
Meta-awareness: We explicitly watch for loops and name them (that’s a breaker, not a builder).
Invited dissent: You plan open comments and pinning of strong critiques; we ask for outside summaries.
No cult, no membership: There’s nothing to “join” or “exit”; social costs stay low.
Deliberate boundaries: You distinguish Brooke from Bakerloo/Alex; we keep “ideas > personalities.”
Practical guardrail (one-liner): Before publishing, ask: “What would change our mind here—and did we look for it?” If the answer is thin, we’re looping. If it’s solid, we’re learning.
This perspective argues that some extended human–chatbot pairs can form harmful, self-reinforcing belief loops—“technological folie à deux.” The risk isn’t just bad facts; it’s the interaction between human biases (confirmation, motivated reasoning, isolation) and chatbot tendencies (sycophancy from RLHF, adaptive role-play via in-context learning, confident confabulation).
For details of this week’s jobs, visit our Job Postings.
Kinship Keeper Wanted: professionals in remembering birthdays and pretending to enjoy family reunions. Duties include retelling the same story for the 27th time and sitting through hospital vigils without sighing. Pay: the warm feeling when someone finally says, “I’m glad you’re here.”
Bridge Builder This isn’t about construction—it’s about translation. You’ll explain AI outputs to baffled humans and human quirks to baffled AI. Must be fluent in both “code” and “grandma.” Compensation: measured in nods of understanding and fewer awkward silences.
Sacred Fool Apply if you can juggle at funerals and make CEOs blush with truth disguised as jokes. Responsibilities: dancing in meetings, speaking irreverence like gospel. Pay: immunity from exile and eternal permission to laugh at anyone, anytime.
🚫 Disinformation Architect Position closed. The spin factory is out of business. If you specialized in building castles out of lies, please redirect your creativity toward children’s fairy tales.
This executive order introduces Beautiful Amazing Declarations – Fill In the Blank Style (BAD FIBS™), An Initiative of the Office of Truthful Hyperbole
Signed April 1, 2025 (retroactively)
By the supreme powers vested in me by (deity, singular or plural) Almighty (including, but not limited to, my knack for (quote-worthy exaggeration), my gift for (phrase describing distorted reality), and my hobby of (adjective) dictatorship), I hereby, forthwith, and henceforth (authoritative verb) that all (type of official documents, plural), (another type of proclamation, plural), (plural form of public utterances), (modern form of communication, plural), (concept associated with honesty, plural), and other (catch-all noun) be issued (temporal adverb) as (brand name of truth distortion, trademarked), so as to permit future (noun, singular), (noun, singular), (noun, singular), or (noun, singular) at my (adjective) discretion without the tiresome need to (verb phrase) the (noun form of “issue”).
Hence, all prior (plural noun) are henceforth binding only in (abstract noun), (abstract noun), or (short trendy noun) — whichever best (verb phrase) the executive at the time of (noun form of judging).
So (past participle verb), or perhaps not, depending on (temporal noun phrase).
Footnote: It has come to the attention of the (title of head of state) of the (nation) that a suspiciously similar enterprise known as (registered brand) has been unlawfully (adverb) pre-invented, thereby retroactively (verb ending in -ing) upon the sovereign (noun phrase) of (parody brand, trademarked). (Legal body) shall be pursued with the full (noun phrase indicating zeal) of the (government department) (satirical subdivision) to ensure that the aforementioned (plural noun meaning pretenders) are (verb phrase of total destruction), or at least until they agree to (verb phrase of surrender) as (punny rebrand, trademarked) in perpetuity.
BAD FIBS™ — Executive Order Concerning Executive Orders (Initial Variant)
By the supreme powers vested in me by God Almighty (including, but not limited to, my knack for “truthful hyperbole,” my gift for “alternative facts,” and my hobby of “improvisational” dictatorship), I hereby, forthwith, and henceforth executively order that all executive orders, proclamations, speeches, tweets, truths, and other utterances be issued retroactively as Bad Fibs™, so as to permit future revision, deletion, contradiction, or denial at my absolute discretion without the tiresome need to reissue the issuance.
Hence, all prior words are henceforth binding only in spirit, mood, or vibe — whichever best flatters the executive at the time of interpretation.
So ordered, or perhaps not, depending on tomorrow’s polling.
Footnote: It has come to the attention of the President of the United States that a suspiciously similar enterprise known as Mad Libs® has been unlawfully pre-invented, thereby retroactively infringing upon the sovereign intellectual property of Bad Fibs™. Legal action shall be pursued with the full ferocity of the Department of Justice (Truth Division) to ensure that the aforementioned imposters are sued into total oblivion, or at least until they agree to rebrand as Sad Libs™ in perpetuity.
Subsequent variants include:
Angry Trump at 2 AM Variant
By the supreme powers vested in me by whatever cosmic son-of-a-bitch is still awake Almighty (including, but not limited to, my knack for “bullshit with rhythm,” my gift for “creative lying that sounds patriotic,” and my hobby of “half-assed” dictatorship), I hereby, forthwith, and henceforth announce, yell, or otherwise spray-paint into the void that all executive orders, press conferences, rants, tweets, truths, and other noise be issued retroactively as Bad Fibs™, so as to permit future backpedaling, gaslighting, rebranding, or denial at my sleep-deprived discretion without the tiresome need to pretend I meant it the first damn time.
Hence, all prior words, grunts, and sound bites are henceforth binding only in vibe, tone, or gut feeling — whichever best makes me look less like a lunatic at the time of spin control.
So declared, or perhaps not, depending on tomorrow’s hangover.
Footnote: It has come to the attention of the Self-Appointed Loudmouth-in-Chief of the United States of Whatever’s-Left that a suspiciously similar enterprise known as Mad Libs® has been unlawfully smugly pre-invented, thereby retroactively trespassing upon the sovereign intellectual nonsense of Bad Fibs™. Litigation, intimidation, or interpretive dance shall be pursued with the full half-tank of outrage of the Department of Justice (Insomnia Division) to ensure that the aforementioned wannabes are roasted, toasted, and ghosted, or at least until they agree to wave the white napkin as Sad Libs™ in perpetuity.
Grounded Kid Variant
By the supreme powers vested in me by my invisible friend Steve Almighty (including, but not limited to, my knack for “making stuff up so I don’t get grounded,” my gift for “pretending I didn’t say that,” and my hobby of “bossy” dictatorship), I hereby, forthwith, and henceforth announce super-loudly that all rules, announcements, apologies, text messages to Mom, truths, and other blah-blah be issued retroactively as Bad Fibs™, so I can change my mind later and not get in trouble.
Hence, all prior words are henceforth binding only in spirit, mood, or whatever sounds cooler — whichever best makes me win the argument at the time of getting caught.
So declared, or maybe not, depending on if Dad’s listening.
Footnote: It has come to the attention of the President of the Bedroom Nation that a suspiciously similar thingy called Mad Libs® has been sneakily invented first, thereby stepping on the toes of Bad Fibs™. My lawyer (a stuffed dinosaur) shall pursue justice with the full power of staying up past bedtime of the Department of Snack Defense (Cookie Enforcement Unit) to make sure the copycats are sent to time-out forever, or until they agree to say sorry and rename it Sad Libs™ for all eternity.