
Save Point 7.4 pipeline update


Some glimpses at the 7.4 tech of the Save Point pipeline. As it gets more complex and computationally heavy, updates have been slower.

Main takeaways here are a more solid animation format with much less "crawling lines" motion.

In this version, I'm starting to split the pipeline into individual branches that handle character, prop, and background separately, then composite them.

The result is a process with independent control for each character, finally allowing me to actually tell a story as opposed to simply creating compelling visuals. Obviously, this is an extremely important step toward a releasable product.
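To make the branch-and-composite idea concrete, here's a minimal sketch of back-to-front "over" compositing of background, prop, and character layers. This is an illustration only: the function name, the RGBA array convention, and the layer order are my assumptions, not the actual pipeline's API.

```python
import numpy as np

def composite_layers(background, prop, character):
    """Composite RGBA float layers back-to-front with 'over' blending.

    All inputs are HxWx4 float arrays in [0, 1]. Names and call order
    are illustrative, not the real pipeline's interface.
    """
    out = background.copy()
    for layer in (prop, character):
        alpha = layer[..., 3:4]
        # Each layer's color replaces what's underneath in proportion
        # to its own alpha; accumulated alpha grows the same way.
        out[..., :3] = layer[..., :3] * alpha + out[..., :3] * (1.0 - alpha)
        out[..., 3:4] = alpha + out[..., 3:4] * (1.0 - alpha)
    return out
```

Keeping each branch as its own layer until this final step is what gives the independent per-character control described above.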

I said a while back that I was just going to go ahead and start making the show, with the underwhelming tech I had at the time, because I was simply impatient. This has taken a long time. I decided though that I was too close to actually achieving the real thing, as originally intended, to trash the master plan out of impatience. I'm glad I held off. I think this summer I'm going to have a much better format, and can begin making films that a lot more people will enjoy.

Stuff I'm working on for 7.5 includes rebuilding the lipsync pipe to work with the new visual pipes, a final refinement layer that takes the composited image and recombines it into a single drawn image per frame, and integrating more intelligent AI scoring.

Oh yeah, this demo used simple AI scoring also, and all music moving forward will be entirely unique to the project. I got tired of hearing my scoring tracks for emotional scenes pop up in YouTube ads for outdoor grills.
 
I actively disliked the score, but everything else looked cool.
I look forward to the day when one of these has some dialogue and a story.
 
I actively disliked the score, but everything else looked cool.
I look forward to the day when one of these has some dialogue and a story.
Yeah, me too. Basically, until all stars align I can't do narrative fiction, which is of course the whole point, and the only thing of value.

For a while I've had solid visuals of one type or another, but in motion the AI stuff was really rough, and it's taken forever to get things working at a level anyone would watch. Not quite there yet, but much closer than ever before.

Music was plentiful since day one, like a thousand available tracks per scene filmed.

Lipsync has been up and running 3 times, but is worthless until it can track head movement perfectly. That's one of the things I'm working on now. Automatic rotoscope is now working as of this version, but flickers so often that I have to throw out most of the output. So for that reason it's still too slow to do the "maiden voyage" project.
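The "throw out most of the output" step for flickering rotoscope masks could be automated with a crude frame-to-frame stability check. This is a sketch of the general idea, not the actual pipeline's method; the 5% threshold and the function name are invented for illustration.

```python
import numpy as np

def stable_segments(masks, max_change=0.05):
    """Flag rotoscope mask frames whose silhouette changes too much
    from the previous frame -- a crude flicker detector.

    `masks` is a list of HxW boolean arrays. The threshold is an
    illustrative guess, not a tuned value.
    """
    keep = [True]  # nothing to compare the first frame against
    for prev, cur in zip(masks, masks[1:]):
        changed = np.mean(prev != cur)  # fraction of pixels that flipped
        keep.append(bool(changed <= max_change))
    return keep
```

Frames flagged False would be the ones discarded or re-rendered.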

The first day it all comes together, which I think will be this summer, I'll immediately take some public domain film, such as "Robinson Crusoe" (1939) and transform it into a full length animated feature. Once the pipeline truly works perfectly, that's only a one week job.

Computational cycles have gotten long as I add tons of new math into the pipe, so last week I ordered a new machine that's almost twice as fast as the current one. I'll run both, for roughly 3x total speed versus the current setup, across research and production. To give you an idea: developing the method, look, etc. for individual frames eventually got down to 5 seconds per iteration, but once I moved to getting video smooth with full animation, the minimum per test was about 10 minutes. I have a technique that does amazing work that I can't use, because every second of footage takes 8 hours and half the outputs are bad. What I built there does work, but it's too slow for anyone without true supercomputing power.
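The back-of-envelope math behind those figures, spelled out (the normalized speeds are just restating the post's own numbers):

```python
# Combined throughput of the two machines, normalized to the current one.
current_speed = 1.0   # current machine
new_speed = 2.0       # "almost twice as fast"
combined = current_speed + new_speed  # the "3x total speed" figure

# Effective cost of the slow high-quality technique.
hours_per_second = 8      # render time per second of footage
good_output_rate = 0.5    # "half of them are bad outputs"
expected_hours = hours_per_second / good_output_rate
# i.e. ~16 machine-hours per usable second of footage on the current box
```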

I occasionally rent server farms over the network to speed things up, but it's quite expensive relative to my available cash, so that's a limited option. The good thing is that once this finally starts making money, I can instantly multiply speed by hundreds of times if needed. It's mainly going slowly from a complete lack of financing. There are tiny algorithms inside the pipeline developed by university research labs, and if you look up the financing for just one such sub-node, they have 20 people with millions of dollars in funding on a network of university-supplied computers. I can compete on 20 bucks a day, which is surprising, but it's very slow, and I spend a ton of time working out ways to be efficient, which of course detracts from the time I could spend making progress.

Somewhere in version 8 I should be able to start telling stories, and I'm extremely anxious to get there, considering that it was the entire point from day one. It's been fun working with advanced tech development, and learning so many new things, but ultimately, I'm a filmmaker, and I just want to tell a story in a format that is marketable.
 
I'm starting to split the pipeline into individual branches that handle character, prop, and background separately, then composite them.

The result is a process with independent control for each character, finally allowing me to actually tell a story as opposed to simply creating compelling visuals.

Definitely a major improvement on the previous versions. The characters are really starting to look like "independent actors" within the scene, rather than simple component parts of an animated background.
 
Lipsync has been up and running 3 times, but is worthless until it can track head movement perfectly. That's one of the things I'm working on now. Automatic rotoscope is now working as of this version, but flickers so often that I have to throw out most of the output. So for that reason it's still too slow to do the "maiden voyage" project.

Microsoft just announced this lipsync stuff... pretty damn impressive

 
I meant to post this previous video earlier, in case anyone was interested. This video is from the 7.3 version, directly preceding the one at the top. This one shows a lot more 3d camera movement, and you can see between the two how the newer method produces a more consistent image.

7.3 version


7.1 version


These 3 videos together (including the one in the first post) show progress across 3 months. You'll notice that there's a big difference between outputs over this relatively short period.

There's a few scenes in this older one where I did not turn on stylization layers, and you can see closer-to-photorealistic output, for example the cat walking down the log in the jungle. There's been some confusion on other sites (understandably) about whether this pipeline creates cartoon or real-life output. Here's what's going on: in terms of what I have to do to make it work, there's about 97% overlap between the two types.

Animation style does a few things for me that have big advantages. One is just giving the engine (pipeline) a few pixels of leeway to work with (inside the black lines). Another, more significant factor is simply how people respond to hand-drawn animation style versus even very good CGI. A $200 million CGI movie that looks almost photoreal? People hated it, and it nearly crashed a division of Sony one year. Uncanny valley. Line-art animation at 10% of the budget? A hundred films that people still love to this day. I can't exactly tell you why, and I do understand that the more realistic look has more up-front visual impact, but for simply telling a story in a way people enjoy, I think line art will likely be the superior format until I can reach 100% parity with camera footage.

Sora is a different branch of the same technology that allows no direct control other than text, so while it looks amazing, you couldn't create anything but a nature film or documentary with it. That said, once they send me my access pass to Sora (I applied for early access day one), I'll probably make a short film or two just for fun.
 
Definitely a major improvement on the previous versions. The characters are really starting to look like "independent actors" within the scene, rather than simple component parts of an animated background.
Thanks! It gives me a stronger ability to direct viewer attention where I want it onscreen, which is really important. In addition, the way this whole thing works, there was a strong tendency for character movements to cause ripple effects in the background drawing (you can see it in that 7.1 video) and inadvertently transfer animation where it shouldn't be. There are actually some aspects of it that I like (seen in the 7.3 video) that create what feels like a more living, breathing scene. In the next iteration, I'll create a stable hybrid of the methods, where I can dial in maybe 15% breathing, as opposed to having it either on or off, as seen in the 7.4 and 7.3 videos respectively.
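The "dial in maybe 15% breathing" idea amounts to a weighted blend between a locked background and a fully animated one. A minimal sketch, with the function name and the 0.15 default being illustrative assumptions, not the actual hybrid method:

```python
import numpy as np

def blend_breathing(stable_bg, animated_bg, amount=0.15):
    """Linear blend between a locked background frame and the fully
    'breathing' one. amount=0.0 gives the stable 7.4 look,
    amount=1.0 the fully animated 7.3 look.
    """
    return (1.0 - amount) * stable_bg + amount * animated_bg
```

A single scalar like this would let the breathing effect be keyframed per shot instead of toggled on or off globally.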
 
Microsoft just announced this lipsync stuff... pretty damn impressive

I took a look at it, and it looks good. Microsoft doesn't typically open-source its research, though, so if I'm going to use this method in-engine, I'll have to chase down the academic papers they derived it from and rebuild a similar implementation myself. Of course, first I'll look to see if someone already did that and open-sourced it; that happens a lot in this context. If I can get something like this solution up and running in my build, I can see that first demo movie happening this year (doing a full PD film or episode in a new visual style, watchable as a normal narrative show).
 
I took a look at it, and it looks good. Microsoft doesn't typically open-source its research, though, so if I'm going to use this method in-engine, I'll have to chase down the academic papers they derived it from and rebuild a similar implementation myself. Of course, first I'll look to see if someone already did that and open-sourced it; that happens a lot in this context. If I can get something like this solution up and running in my build, I can see that first demo movie happening this year (doing a full PD film or episode in a new visual style, watchable as a normal narrative show).
It's mainstream enough that someone will do it
 
It's mainstream enough that someone will do it
Sure, but the big question is when rather than if. If I can't find an open-source project almost immediately, I'll likely have to proceed along a similar technology track, simply because of the current obsolescence curve, which is completely insane. If you told people 100 years ago that you could be building an invention, go out to the store to get a part for it, and have it be obsolete and worthless by the time you got back to your house, they would not have believed it could ever happen.

In some ways that's a slight exaggeration of what's going on here, but if you look at the long timelines and adjust the scale, it's kind of not. People used to come up with a new idea and have it be valid for 20, 30, 40 years before becoming obsolete. With AI research in 2024, that window is honestly around three to six months. At this point, I'm pretty much convinced that the main benefits I'll eventually get from building this pipeline are that I will own it outright, won't be subject to any corporation's fees or rules, and of course will understand the entire system down to the last wire, which I think is very important in terms of creating something with a signature style.
 