Seedance 2.0: The AI Video Tool That Has Hollywood Threatening Lawsuits
ByteDance just dropped an AI video tool so good that Disney, Paramount, Netflix, Warner Bros., SAG-AFTRA, and the entire Motion Picture Association all sent legal threats within one week of launch.
That’s not hyperbole. That actually happened.
Seedance 2.0 launched on February 12, 2026, and within 24 hours, people were generating eerily convincing clips of Tom Cruise fighting Brad Pitt on a rooftop. Within 48 hours, Disney’s lawyers were calling it a “virtual smash-and-grab” of their intellectual property. By the end of the week, every major Hollywood studio had piled on.
So what exactly is this tool? Is it really that good? Can you actually use it? And should you?
Let’s break it all down.
What Is Seedance 2.0?
Seedance 2.0 is ByteDance’s next-generation AI video generator. If that company name sounds familiar, it should. ByteDance is TikTok’s parent company. And now they’ve built what many people are calling the most capable AI video model on the planet.
Here’s what it does at a glance:
- Generates videos up to 15 seconds long at 1080p resolution
- Accepts text, images, video clips, and audio as inputs (up to 12 reference files at once)
- Creates video and audio simultaneously in a single generation
- Supports cinematic camera movements, multi-shot storytelling, and creative effects
That last point is the game-changer. Most rival AI video tools either generate silent video or treat audio as a secondary step: you make the clip, then you go find music, add sound effects, maybe record voiceover separately. Seedance 2.0 generates the audio baked right into the video. Sound effects, dialogue, ambient noise, even lip-synced speech. All in one pass.
If you’ve been following the AI video space, you know that audio has always been the missing piece. A beautiful AI-generated clip with no sound feels like a tech demo. A beautiful clip with matched audio feels like a movie. That’s the leap Seedance 2.0 makes.
How Seedance 2.0 Actually Works
You don’t need to understand the architecture to use the tool, but knowing the basics helps you get better results.
Seedance 2.0 uses what ByteDance calls a Dual-Branch Diffusion Transformer. In plain English: it has two “brains” working in parallel. One generates the video. The other generates the audio. They’re synchronized throughout the entire generation process, which is why the lip-sync and sound effects match so well.
Previous AI video tools used a different architecture (called U-Net) that was basically inherited from image generators. Seedance 2.0 swaps that out for a transformer-based system, which is the same fundamental approach that powers GPT and other large language models. Bigger model, more parameters, better results.
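To make the "two synchronized brains" idea concrete, here's a deliberately toy sketch of coupled denoising. Everything in it is invented for illustration (the latents, the update rule, the step count); ByteDance hasn't published Seedance 2.0's actual architecture. The point is only the structure: two branches denoised in lockstep, each conditioned on the other at every step.

```python
import random

random.seed(0)
STEPS = 10

def denoise_step(latent, other_latent, t):
    # Toy update: each branch is conditioned on a summary (mean) of the
    # other branch -- a crude stand-in for real cross-modal attention.
    cross = sum(other_latent) / len(other_latent)
    return [x - (0.1 * x + 0.05 * cross) * (t / STEPS) for x in latent]

video_latent = [random.gauss(0, 1) for _ in range(64)]  # toy "video" latent
audio_latent = [random.gauss(0, 1) for _ in range(32)]  # toy "audio" latent

# Both branches denoise in lockstep, exchanging information at every step.
# Conceptually, this coupling is why generated audio tracks the visuals.
for t in range(STEPS, 0, -1):
    video_latent, audio_latent = (
        denoise_step(video_latent, audio_latent, t),
        denoise_step(audio_latent, video_latent, t),
    )
```

Contrast this with generating video first and audio second: with no shared state during generation, the audio branch can only react to finished frames, which is where lip-sync drift comes from.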
The Four Input Types
What makes Seedance 2.0 uniquely flexible is its multimodal input system. You can feed it:
- Text prompts — Describe what you want. “A samurai walking through a foggy bamboo forest at dawn, camera slowly tracking from behind.”
- Reference images — Upload a photo of a character, location, or style you want the video to match.
- Video clips — Feed in existing footage to extend, modify, or use as a motion reference.
- Audio files — Provide music, dialogue, or sound effects that the generated video should sync to.
You can combine all four at once, uploading up to 12 reference files in a single generation. No other AI video model offers this level of input flexibility.
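If you think of a generation request as structured data, the four input types above compose naturally. The sketch below is purely illustrative: ByteDance has not published a public Seedance 2.0 API, so every field name and function here is invented to show the shape of a multimodal request, including the 12-file cap.

```python
# Hypothetical request builder -- field names are invented for illustration;
# there is no public Seedance 2.0 API as of this writing.
MAX_REFERENCE_FILES = 12

def build_request(prompt, images=(), videos=(), audio=()):
    references = [
        {"type": kind, "path": path}
        for kind, paths in (("image", images), ("video", videos), ("audio", audio))
        for path in paths
    ]
    if len(references) > MAX_REFERENCE_FILES:
        raise ValueError(f"at most {MAX_REFERENCE_FILES} reference files allowed")
    return {"prompt": prompt, "references": references}

request = build_request(
    "A samurai walking through a foggy bamboo forest at dawn, "
    "camera slowly tracking from behind.",
    images=["samurai_design.png"],   # placeholder filenames
    audio=["ambient_forest.wav"],
)
```

The takeaway isn't the code itself but the mental model: a prompt plus a mixed bag of typed references, rather than a single text box.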
What You Can Actually Create
Let’s get specific about the features, because “AI video generator” is a broad label and the details matter.
Text-to-Video: The basics. Type a description, get a video. But the prompt adherence here is noticeably better than competitors. Complex scene descriptions with specific camera movements actually produce what you asked for.
Image-to-Video: Upload a still photo and Seedance will animate it into a video. This is huge for creators who already have character designs, product shots, or concept art they want to bring to life.
Video-to-Video Editing: This is where it gets interesting. Upload existing footage and modify specific parts of it. Swap a character, change the lighting, extend a scene, add effects. You’re not starting from scratch every time.
Native Audio Generation: Sound effects, dialogue, ambient noise, and music generated alongside the video. The lip-sync is automatic. You can even specify dialogue in multiple languages, including Spanish and Korean.
Multi-Shot Storytelling: Maintain character and scene consistency across multiple clips. If you’re building a short film or content series, your characters will actually look the same from shot to shot. (If you’ve struggled with this in other tools, check out our guide to character consistency in AI video for tips that work across platforms.)
Camera Control: Replicate specific cinematic camera techniques. Hitchcock zoom, 360-degree surround shots, drone flyovers, follow-cams. You describe the camera movement in your prompt and the model executes it.
Creative Effects: Particle dispersal, ink diffusion, shatter effects, outfit changes with motion blur. The kind of VFX work that would take a professional hours or days.
The Success Rate Advantage
Here’s a number that doesn’t get enough attention: ByteDance claims a 90%+ generation success rate with Seedance 2.0.
If you’ve used AI video tools before, you know the pain of the generation lottery. With earlier models, maybe 1 out of 5 generations was actually usable. You’d burn through credits generating the same prompt over and over, hoping the next one wouldn’t have melted fingers or a face that randomly morphs halfway through.
A 90% success rate means you’re getting usable output on nearly every attempt. That’s a massive quality-of-life improvement, and it makes the tool dramatically more cost-effective since you’re not wasting credits on failed generations.
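The economics are easy to check with back-of-the-envelope math: on average you need 1 / success_rate attempts per usable clip, so credits per usable clip scale the same way. Using the 180-credit standard generation cost from the pricing section below:

```python
# How success rate changes the real cost per usable clip.
CREDITS_PER_GENERATION = 180  # standard video, per the pricing section

def credits_per_usable_clip(success_rate):
    # Expected attempts per usable clip is 1 / success_rate.
    return CREDITS_PER_GENERATION / success_rate

old_cost = credits_per_usable_clip(0.20)  # the "1 out of 5 usable" era
new_cost = credits_per_usable_clip(0.90)  # ByteDance's claimed rate

print(round(old_cost))  # 900 credits per usable clip
print(round(new_cost))  # 200 credits per usable clip
```

That's a 4.5x effective cost difference from reliability alone, before any difference in sticker price.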
The Hollywood Firestorm: A Complete Timeline
Now for the part everyone’s talking about.
Day One: The Viral Moment (February 13, 2026)
Less than 24 hours after launch, X user @RuairiRobinson posted a video of Tom Cruise fighting Brad Pitt on a rooftop. He claimed it was made with “a 2-line prompt in Seedance 2.” The clip went massively viral.
People weren’t just impressed. They were shaken.
Rhett Reese, the screenwriter behind Deadpool and Zombieland, watched the clip and posted: “I hate to say it. It’s likely over for us.”
That quote ricocheted across social media and news outlets for days. A working Hollywood screenwriter publicly admitting that AI video had crossed a threshold. Not in some theoretical future. Right now.
But the viral Tom Cruise video was just the spark. What followed was a wildfire.
Day Two: Disney Strikes (February 14, 2026)
On Valentine’s Day, Disney sent a formal cease-and-desist letter to ByteDance. The letter, sent by attorney David Singer of Jenner & Block, did not mince words.
Disney accused ByteDance of a “virtual smash-and-grab of Disney’s IP” and claimed the company had supplied Seedance with a “pirated library” of copyrighted characters. The letter specifically named Spider-Man, Darth Vader, Grogu (Baby Yoda), Peter Griffin from Family Guy, and various Marvel and Star Wars characters.
The language was pointed: “We believe this is just the tip of the iceberg, which is shocking considering Seedance has only been available for a few days.”
Disney called the infringement “willful, pervasive, and totally unacceptable.”
Now, it’s worth noting that Disney isn’t anti-AI. They signed a three-year licensing deal with OpenAI back in December 2025. They’ve been exploring AI tools internally. Their issue isn’t with AI video generation as a concept. It’s with a company training a model on Disney’s copyrighted works without permission and then letting users generate unlimited Disney content for free.
There’s a big difference between “we’ll license our IP to AI companies on our terms” and “anyone with an internet connection can now generate unlimited Spider-Man content.”
Day Three: Paramount Joins In (February 15, 2026)
Paramount sent their own cease-and-desist. Their complaint was even more specific: they claimed Seedance-generated content featuring their franchises was “often indistinguishable, both visually and audibly, from actual Paramount films and TV.”
That “audibly” part matters. Because Seedance generates audio, you’re not just getting a visual approximation of a copyrighted character. You’re getting something that sounds like it came from the actual studio.
The MPA Goes Nuclear (February 13-21, 2026)
The Motion Picture Association, the industry trade group representing Disney, Warner Bros. Discovery, Paramount, Netflix, Sony, and Universal, issued a statement through CEO Charles Rivkin:
“In a single day, the Chinese AI service Seedance 2.0 has engaged in unauthorized use of U.S. copyrighted works on a massive scale.”
The MPA demanded ByteDance “immediately cease its infringing activity.” By February 21, the MPA had sent its own formal cease-and-desist letter to ByteDance.
SAG-AFTRA Weighs In
The actors’ union wasn’t about to stay quiet either. SAG-AFTRA released a statement condemning “the blatant infringement enabled by ByteDance’s new AI video model Seedance 2.0.”
Their specific concern went beyond character IP: “The infringement includes the unauthorized use of our members’ voices and likenesses.”
The union called Seedance 2.0 a tool that “disregards law, ethics, industry standards and basic principles of consent.”
This matters because it’s not just about Spider-Man. It’s about real people. Real actors whose faces and voices are being used without permission to generate content they never agreed to appear in.
Netflix and Warner Bros. Pile On (February 18, 2026)
By February 18, Netflix and Warner Bros. had sent their own cease-and-desist letters. Every major Hollywood studio was now aligned against ByteDance.
CAA, one of the biggest talent agencies in entertainment, said it was “talking directly with ByteDance to address Seedance 2.0’s brazen disregard for creators’ rights.”
The Human Artistry Campaign, a coalition backed by Hollywood unions and trade groups, called Seedance 2.0 “an attack on every creator around the world.”
ByteDance’s Response
ByteDance responded on February 16 with a carefully worded statement: “ByteDance respects intellectual property. We are taking steps to strengthen current safeguards as we work to prevent the unauthorised use of intellectual property and likeness by users.”
They pledged to add safeguards. They did not pull the model. They did not apologize. They did not admit to training on copyrighted data.
This is the standard playbook: launch fast, deal with consequences later. And it’s worth noting that ByteDance isn’t the first AI company to face these accusations. Google, OpenAI, Stability AI, and others have all dealt with similar (if less dramatic) legal pressure. ByteDance just launched with far fewer guardrails than their Western competitors, which made the problem impossible to ignore.
The Face-to-Voice Privacy Scandal
Before the Hollywood drama even started, Seedance 2.0 had already triggered a different controversy.
On February 10, two days before the public launch, Chinese tech reviewer Pan Tianhong (known online as Yingshi Jufeng / MediaStorm) demonstrated something genuinely unsettling. He uploaded a single facial photo to Seedance 2.0, with no voice samples, no audio reference, just a picture of a face. The tool generated video with audio that was nearly identical to the person’s real speaking voice.
Read that again. A single photo of someone’s face. And the AI reconstructed their approximate voice.
This means the model learned face-to-voice correlations from its training data so thoroughly that it could infer how someone sounds from how they look. The implications for privacy, identity theft, and deepfakes are enormous.
ByteDance immediately suspended the real-person reference feature. They also tightened identity verification requirements after users reported an unprompted Keanu Reeves lookalike appearing in unrelated generations.
This feature remains suspended as of this writing. But the fact that it worked at all tells you something about the depth of data this model was trained on.
How Seedance 2.0 Compares to Competitors
Let’s cut through the hype and look at how Seedance 2.0 actually stacks up against the other major AI video tools available right now.
Seedance 2.0 vs. Sora 2 (OpenAI)
Sora 2 has the better physics simulation. Objects interact more realistically, and it handles complex physical scenarios (liquids, collisions, cloth dynamics) with more accuracy.
But Seedance 2.0 wins on audio, creative control, and arguably motion quality. Sora generates silent video. Seedance generates video with synchronized sound. And while Sora 2’s raw capability is impressive, OpenAI has restricted it heavily. As one Reddit user put it: “Sora 2 mogs Seedance 2.0 already BUT, as OpenAI usually does, they censored and lobotomized the model so much that it’s now behind the competition.”
The guardrails on Sora make it less useful for creative work. Seedance’s relative lack of guardrails is exactly what created the Hollywood controversy, but it’s also why creators find it more versatile.
Seedance 2.0 vs. Kling 3.0 (Kuaishou)
Kling 3.0 has the resolution advantage: it can do 4K at 60fps, while Seedance maxes out at 1080p. If raw visual fidelity is your priority, Kling wins.
But Seedance beats Kling on audio (Kling doesn’t generate audio natively), multimodal input flexibility, and multi-shot character consistency. Kling also has a generous free tier, making it the more accessible option for creators on a budget.
For a full breakdown of the current landscape, see our comparison of the best AI video tools available right now.
Seedance 2.0 vs. Veo 3.1 (Google)
Veo 3.1 is the “broadcast ready” option. It outputs at cinema-standard 24fps with a polished, professional aesthetic. It also has the best developer API if you’re building applications.
Seedance 2.0 has better audio quality, more creative control through its multimodal input system, and better performance in motion-heavy action scenes. If you’re creating content that needs dynamic movement, fight choreography, or chase sequences, Seedance handles it better.
The Reddit Consensus
Real users on Reddit (not press releases, not marketing copy) have been vocal about where Seedance 2.0 stands:
From r/singularity: “Seedance wins… Kling 2nd place.”
From r/singularity: “This is just pure insanity. We are here now… people will be able to create entire worlds.”
From r/singularity: “Honestly, Seedance 2.0 physics are insane compared to Runway.”
And the quote that sums up the existential mood: “Ok, this is the first AI-generated video that actually makes me think that Hollywood might end up being dead in the near future.”
The general sentiment across Reddit threads is excitement about quality, concern about copyright implications, and frustration about China-only access.
How to Access Seedance 2.0 Right Now
Here’s the situation as of February 21, 2026.
Current Access (China Only)
Seedance 2.0 is currently available through:
- Jimeng (Jianying) — ByteDance’s creative platform in China, accessible at Jianying.com
- Xiaoyunque (小云雀 / “Little Skylark”) — A dedicated app on Android and iOS
- Both require a Chinese Douyin user ID
If you’re outside China, you cannot officially access Seedance 2.0 yet.
Global Launch (Coming Very Soon)
ByteDance has confirmed that Seedance 2.0 will be integrated into CapCut, their global video editor that millions of TikTok creators already use. This is the access point most creators should be watching for.
According to community intel, the global launch is expected through:
- Dreamina (dreamina.capcut.com) — ByteDance’s international AI creative platform, reportedly launching Seedance 2.0 around February 25, 2026
- CapCut integration — Coming alongside or shortly after the Dreamina launch
However, the Hollywood controversy could delay things. When every major studio and the MPA are sending you legal threats, a global rollout gets more complicated.
Step-by-Step: What to Do When Global Access Launches
- Go to Dreamina (dreamina.capcut.com) or open CapCut
- Create an account or sign in with your existing CapCut/TikTok account
- Look for the Seedance 2.0 model in the AI video generation options
- Start with a simple text prompt to get a feel for the tool before uploading reference files
- Experiment with image-to-video by uploading a character design or product photo
- Try multi-input generation by combining a text prompt with an image and audio reference
- Use the video extension feature to build longer sequences from your best clips
We’ll publish a full hands-on walkthrough as soon as the global launch goes live.
Warning: Watch Out for Fake Sites
This is important. Reddit users have flagged that Google search results are full of sketchy sites claiming to offer Seedance 2.0. Many are third-party API wrappers that charge a premium and may not actually provide the full Seedance 2.0 experience.
The only official sources are:
- seed.bytedance.com (ByteDance’s research page)
- Jimeng/Jianying (China)
- Dreamina/CapCut (global, coming soon)
If it’s not one of those, proceed with extreme caution.
Pricing: What Will Seedance 2.0 Cost?
Based on current Chinese pricing and expected global pricing:
Membership: Approximately $9.60 USD per month (69 RMB on Jimeng; Dreamina pricing is expected to be similar). There’s a 1 RMB trial available in China, and new signups get around 260 bonus credits.
Free tier: The Xiaoyunque app is currently in a free trial phase. The standard platform gives roughly 15 seconds of generation per day for free.
Credit costs:
- Standard video (up to 15 seconds): 180 credits
- HD video: 240 credits
Cost per video: ByteDance claims a standard VFX shot costs roughly 3 RMB, which is about $0.42 USD. Even if global pricing is higher, we’re talking about single-digit dollars per clip.
Third-party platforms (various wrappers and integrations) range from $9-$20/month, with enterprise tiers around $49.90/month.
For comparison, Sora 2 and Veo 3.1 are significantly more expensive per generation. If you’re producing volume, Seedance’s pricing is a major advantage.
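You can turn the figures above into a rough per-project estimator. One assumption to flag: this treats ByteDance's ~$0.42 "standard VFX shot" as the dollar cost of one 180-credit standard generation, which the published figures imply but don't state outright, and it folds in the claimed 90% success rate.

```python
# Rough cost estimator from the article's figures.
# Assumption: ~$0.42 corresponds to one 180-credit standard generation.
USD_PER_CREDIT = 0.42 / 180

def est_cost_usd(n_usable_clips, credits_per_clip=180, success_rate=0.9):
    # Expected credits spent, accounting for the occasional failed generation.
    expected_credits = n_usable_clips * credits_per_clip / success_rate
    return expected_credits * USD_PER_CREDIT

# Ten usable standard clips at the claimed 90% success rate:
print(round(est_cost_usd(10), 2))  # ~4.67 USD
```

Even if global pricing lands 2-3x higher than the Chinese figures, a ten-clip project stays in the low double digits of dollars.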
If you’re thinking about how to turn AI-generated video into income, check out our guide to monetizing AI video content. The economics of tools like Seedance are making new business models viable for solo creators.
What the Copyright Drama Means for Creators
This is the section that actually matters for your work. Let’s be real about the implications.
The Risk to You, Personally
When you generate a video of Spider-Man using Seedance 2.0, the legal risk doesn’t just sit with ByteDance. It also sits with you.
Disney’s cease-and-desist targets ByteDance, but copyright law also covers distribution. If you post AI-generated Spider-Man content on YouTube or TikTok, you could face:
- DMCA takedown notices
- Content strikes on your account
- In extreme cases, legal action (especially if you’re monetizing the content)
The fact that an AI tool made it easy to create the content doesn’t protect you from copyright claims. “The AI did it” is not a legal defense.
What You Should Actually Do
- Use Seedance for original content. Create your own characters, your own worlds, your own stories. This is where the tool shines without legal risk.
- Don’t generate recognizable copyrighted characters. No Spider-Man. No Darth Vader. No Disney princesses. It’s not worth the risk.
- Be careful with celebrity likenesses. The Tom Cruise/Brad Pitt video went viral, but it’s exactly the kind of content SAG-AFTRA is fighting against. Generating recognizable real people without their consent is a legal and ethical minefield.
- For commercial work, stick to original IP. If you’re creating content for clients or for monetization, use only original characters and concepts.
- Add your own voice and audio. Even though Seedance generates audio, consider adding your own voiceover or soundtrack for commercial content. This gives you clearer ownership of the final product. (For tips on this, see our AI voiceover guide for the best tools and techniques.)
The Bigger Picture: AI Video’s “Napster Moment”
What’s happening with Seedance 2.0 is essentially the “Napster moment” for AI video.
Remember Napster? In the late 1990s, suddenly anyone could download any song for free. The music industry panicked, sued everyone, and eventually the market restructured around streaming and licensing deals.
We’re watching the same pattern play out with AI video:
- The technology exists and it’s good enough to be genuinely useful (and genuinely threatening to existing business models)
- The legal framework hasn’t caught up to the technology
- The initial reaction is lawsuits and cease-and-desist letters
- The eventual resolution will be licensing deals between AI companies and content owners
Disney already showed us the future when they signed that three-year licensing deal with OpenAI. The endgame isn’t “AI video gets banned.” The endgame is “AI companies pay for the data they train on, and everyone agrees on rules.”
But we’re in the messy middle right now. And the messy middle means uncertainty for creators.
Expect Tighter Restrictions on All AI Video Tools
This controversy won’t just affect Seedance. It will accelerate restrictions across the entire AI video space.
Every AI video company is watching this play out and updating their legal strategies. Expect to see:
- Stricter content filters that block generation of recognizable characters and celebrities
- Mandatory content ID systems similar to YouTube’s
- Licensing deals between AI companies and studios/rights holders
- Clearer terms of service about what you can and can’t generate
- Possible regulation from lawmakers, especially in the EU and US
The creators who build their workflows around original content will be the ones least affected by these changes. The creators who rely on generating copyrighted characters for views are building on sand.
What Makes Seedance 2.0 Genuinely Important
Step back from the controversy for a moment and look at what this tool represents for creators.
The cost of production just collapsed again. A VFX shot that would cost hundreds or thousands of dollars from a traditional studio can now be generated for under fifty cents. Even if you factor in iteration time and failed generations, the economics are revolutionary.
Audio integration changes the workflow. Not having to separately source, edit, and sync audio for every AI-generated clip removes a massive bottleneck. If you’ve ever spent hours trying to find the right sound effect for a 10-second clip, you understand what a big deal this is.
Multi-shot consistency makes longer content possible. Previous AI video tools were essentially one-shot generators. You could make an impressive single clip, but building a coherent narrative across multiple clips was painful because character appearances would drift. Seedance 2.0’s multi-shot consistency makes short films, product demos, and content series genuinely feasible.
The input flexibility opens up new creative approaches. Being able to feed in reference images, existing footage, audio, and text simultaneously means you can have a real creative dialogue with the tool instead of just typing prompts and hoping for the best.
For creators who are serious about building a content pipeline around AI video, Seedance 2.0 represents a meaningful step forward. Not because it’s perfect (15-second max, 1080p, China-only access for now), but because it solves problems that have been holding back AI video from being a practical production tool.
Limitations You Should Know About
Let’s be honest about what Seedance 2.0 can’t do:
- 15 seconds max per generation. You can extend clips, but each individual generation caps at 15 seconds. Building longer content requires stitching.
- 1080p maximum resolution. In a world where YouTube is pushing 4K, this is a real limitation. Kling 3.0 already does 4K/60fps.
- China-only access (for now). The global launch is imminent but not here yet, and the legal controversy could slow things down.
- Real-person reference feature is suspended. The face-to-voice capability was pulled after the privacy scandal and remains unavailable.
- Guardrails are evolving rapidly. Whatever you can generate today might be blocked tomorrow as ByteDance responds to legal pressure. Don’t build your entire workflow around capabilities that might disappear.
- Training data concerns. The legal actions make it clear that Seedance’s training data likely included copyrighted material. This creates uncertainty about the long-term legal status of content generated with the tool.
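On the 15-second cap: the standard workaround is stitching generations together in post. One reliable approach is ffmpeg's concat demuxer, which joins clips without re-encoding. The sketch below just builds the list file and the command (it assumes ffmpeg is installed, and the filenames are placeholders); stream copy (`-c copy`) only works when all clips share the same codec and resolution, which clips from the same Seedance run should.

```python
import pathlib

def build_concat_command(clips, list_path="clips.txt", output="combined.mp4"):
    """Write an ffmpeg concat-demuxer list file and return the command to run.

    -c copy stream-copies instead of re-encoding, so joining is near-instant,
    but every input must share the same codec, resolution, and frame rate.
    """
    lines = "\n".join(f"file '{clip}'" for clip in clips)
    pathlib.Path(list_path).write_text(lines + "\n")
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_path, "-c", "copy", output]

cmd = build_concat_command(["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"])
print(" ".join(cmd))
```

Run the returned command (e.g. via `subprocess.run(cmd, check=True)`) once your clips are exported, and you get one continuous file without a quality-degrading re-encode.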
Frequently Asked Questions
Is Seedance 2.0 free to use?
There’s a limited free tier that gives you roughly 15 seconds of generation per day. The Xiaoyunque app is currently in a free trial phase with no credits deducted. For regular use, the membership costs about $9.60 USD per month. New signups get around 260 bonus credits. A standard video generation costs 180 credits, and HD costs 240 credits. This makes it one of the more affordable AI video tools, especially considering the audio generation is included.
When will Seedance 2.0 be available outside China?
The expected global launch is through Dreamina (dreamina.capcut.com) around February 25, 2026, with CapCut integration following shortly after. However, the ongoing legal battle with Hollywood studios could delay the timeline. ByteDance hasn’t given a firm global launch date. Keep an eye on Dreamina and CapCut for official availability.
Can I use Seedance 2.0 videos commercially?
This is legally murky territory right now. If you generate original content (your own characters, your own concepts), you’re on much safer ground. If you generate content featuring copyrighted characters or real celebrities, you’re taking on significant legal risk. ByteDance’s terms of service will likely evolve as they respond to legal pressure. For commercial work, stick to original IP and keep records of your prompts and inputs.
How does Seedance 2.0 compare to Sora 2?
They have different strengths. Sora 2 has better physics simulation and more realistic object interactions. Seedance 2.0 has native audio generation, more flexible inputs (up to 12 reference files, versus Sora's far more limited input options), and fewer content restrictions. Sora 2 is heavily moderated, which limits creative freedom. Seedance is less restricted, which is both its advantage and the reason for the legal controversy. For pure video quality, they’re close. For practical creative work, Seedance’s audio integration and input flexibility give it an edge.
Is it safe to use Seedance 2.0, or will I get in legal trouble?
Using the tool itself is not illegal. What matters is what you create with it and how you distribute it. Generating copyrighted characters (Spider-Man, Darth Vader, etc.) and posting them publicly exposes you to DMCA takedowns and potential copyright claims. Generating recognizable celebrity likenesses raises right-of-publicity concerns. Creating original content carries minimal legal risk. The tool is safe. Misusing it is not.
Will Seedance 2.0 replace human filmmakers?
Not yet, and probably not for a long time. What it will do is lower the barrier to entry dramatically. A solo creator can now produce visual content that previously required a team and a budget. But storytelling, direction, editing judgment, and creative vision are still human skills. Think of Seedance as a new camera, not a replacement for the cinematographer. It makes more things possible for more people. That’s democratization, not replacement.
What happened with the face-to-voice feature?
Seedance 2.0 was briefly able to reconstruct a person’s approximate speaking voice from just a facial photo, with no voice samples provided. Chinese tech reviewer Pan Tianhong demonstrated this publicly on February 10, 2026. ByteDance immediately suspended the real-person reference feature due to privacy concerns. It remains suspended. The feature revealed how deeply the model learned correlations from its training data, raising serious questions about privacy and potential misuse.
What Happens Next
We’re watching this story unfold in real time. Here’s what to expect in the coming weeks:
Global launch: Dreamina and CapCut integration should go live in late February, giving creators outside China their first official access to Seedance 2.0.
Legal escalation: The cease-and-desist letters may be just the beginning. If ByteDance doesn’t reach licensing agreements with the studios, actual lawsuits could follow. This would have implications for every AI video company, not just ByteDance.
Tighter guardrails: ByteDance has promised to “strengthen safeguards.” Expect content filters that block copyrighted characters and celebrity likenesses. The tool you access globally will likely be more restricted than what launched in China.
Licensing deals: The most likely resolution is some form of licensing agreement between ByteDance and the major studios, similar to the Disney-OpenAI deal. This would legitimize the tool while giving rights holders compensation and control.
Competitor responses: Expect Sora 2, Kling 3.0, and Veo 3.1 to push updates emphasizing their own audio capabilities. The AI video arms race just intensified.
When Seedance 2.0 hits CapCut globally, we’ll publish a full hands-on tutorial walking you through everything from basic text-to-video to advanced multi-shot workflows. Make sure you’re subscribed so you don’t miss it.
In the meantime, start thinking about what you want to create. Original characters. Original stories. Original worlds. That’s where the opportunity is, and that’s where the legal ground is solid.
The tools are here. The drama is real. And the creators who figure out how to use these tools responsibly and creatively are going to have a massive head start.