Two Minute Papers could not have put it better.
I was watching the video on the Two Minute Papers page. The red characters were trying to get the blue characters, and the blue characters would run away. They would run into a room, then block the entrance with large cubes to keep the red guys out. Nothing left to do. After so many iterations of the same result, the red guys figured out that they could use a ramp, conveniently placed off to the side, to go over the wall and into the room the blue guys were in.

Think about that. The red guys didn't just move the ramp and use it. They needed to know that the ramp was movable. They needed some understanding of elevation, gravity, and possibly friction. Either they were programmed with some basic engineering knowledge, or at least some life experience such as climbing a hill or steps or standing on a block, or they would not have been able to figure out that using the ramp would get them into the room where the blue guys were. Otherwise they would have had to give up and move on to another adventure. Maybe in one of those adventures they would learn the concept of going up and over instead of through a doorway; then they could come back and use the ramp to go over the wall and get the blue guys.

AI must start with some basic information and skills. The red guys already knew how to walk, but they didn't know how to make use of the ramp. That took time. How did they figure it out? Even if they just decided to give it a try with no expectation of success, why didn't they try it on the first iteration? What did they learn that suddenly made them try? I guess this question, plus so many more, is what makes AI the new technology that it is, rather than simply computer programming with more predictive instructions.
I love this shit.
Yes, what you're describing is the really interesting part of AI research. It's called emergent behavior, and it's one of the things I'm working on incorporating into filmmaking. I can put some deer in the forest for the background of a shot, and that looks good, but with a simple AI I can have them grazing around, looking for patches of fresh grass, and so on. With a more complex AI, the deer can get into fights, avoid certain other deer, hide behind trees, and more. Once the brain has goals, methods, and the capacity to experiment, that same background takes on a life of its own. Suddenly I don't know what's going to happen with my background extras, and you get all these interesting behaviors "emerging".
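To make the "goals drive the behavior" idea concrete, here is a minimal sketch of a utility-based background agent. All names (`Deer`, `choose_goal`, the goal labels) are hypothetical, not from any particular engine: each deer scores its candidate goals against its internal state, and the highest-scoring goal is what the animation system would play.

```python
# Minimal utility-based AI sketch (illustrative names, no real engine API).
# Each agent scores its possible goals; the winner drives the animation,
# and varied internal state is what makes the crowd look alive.

class Deer:
    def __init__(self, name, hunger=0.5, fear=0.0):
        self.name = name
        self.hunger = hunger   # 0..1, rises while the deer isn't grazing
        self.fear = fear       # 0..1, spikes when a rival is nearby

    def choose_goal(self, rival_nearby):
        # Score each candidate goal from current state plus the situation.
        utilities = {
            "graze": self.hunger,
            "hide_behind_tree": self.fear + (0.6 if rival_nearby else 0.0),
            "wander": 0.3,     # baseline so idle deer still move around
        }
        # Pick the highest-utility goal.
        return max(utilities, key=utilities.get)

herd = [Deer("doe", hunger=0.8), Deer("buck", fear=0.7)]
for deer in herd:
    print(deer.name, "->", deer.choose_goal(rival_nearby=False))
```

With only three rules, a herd with slightly different hunger and fear values already scatters into different behaviors; layering in more goals and letting state change over time is where the unscripted, "emergent" look comes from.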
It's interesting to note that if you watch the BTS documentaries about the making of HBO's Oz, they were directing the same way, with humans. All the actors in the prison were told to stay in character and go about the normal day-to-day of prison life whether the camera was on them or not, and it increased the sense of realism a great deal: the camera would be following two people talking, and in the background you would always see people eating lunch, fighting, praying, or playing cards naturally. With CG, every action of every character is typically choreographed, so it gets labor-intensive to have a crowd in the background performing hundreds of animations. This is essentially what the much-lauded monster-army system Weta Digital pioneered for the LOTR films was for. This kind of thing has been in video games for years, but the time is now right to bridge these concepts into film use. Soon we will see amazing worlds teeming with life that surprises even its creators.
There is a lot of this technique visible in this trailer, with all the animals and fish imbued with slivers of AI and then filmed. There is no pose-by-pose animation being done here; this was all filmed inside an engine, where the behavior of the characters and fauna is the output of an AI controlling a blueprint that coordinates the animations automatically.
So far all I've implemented are crowds of people stopping at intersections, reading newspapers, taking selfies, and so on. Some birds and fish are also up and running, but the really advanced stuff is yet to come. At some point in development, if I need a SWAT team to breach a house, I can automate that just by giving them the goal, then watch the scenario play out 20 different ways until I have the version I want to shoot for the film. End users won't be able to see the quality difference, because they don't have the other versions to compare against, but overall it should be very noticeable in terms of the scope and quality that's possible on budget X.
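The "watch it play out 20 different ways, then keep the take I want" workflow can be sketched very simply: replay the same goal-driven scenario with different random seeds, and re-run the chosen seed when it's time to film. Everything here (`run_breach`, the entry points, the team names) is a hypothetical illustration, not a real engine API.

```python
import random

# Sketch of seed-based scenario variation (illustrative, not a real API):
# the same breach goal is simulated 20 times, each seed producing a
# different but fully reproducible "take" the director can review.

def run_breach(seed):
    rng = random.Random(seed)  # local RNG so each take is reproducible
    entry = rng.choice(["front door", "back window", "roof hatch"])
    order = rng.sample(["alpha", "bravo", "charlie"], 3)
    return {"seed": seed, "entry": entry, "order": order}

takes = [run_breach(seed) for seed in range(20)]
for take in takes[:3]:
    print(take)

# Re-running a chosen seed reproduces that exact take for the shoot.
assert run_breach(7) == run_breach(7)
```

The key design choice is seeding a local RNG per take: the director reviews variations by seed number, and the one they pick replays identically when the cameras (virtual or otherwise) roll.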