This week we worked on tweaking many components of our algorithm, and we have imported the new agent models. We now have actual people walking around instead of the spheres we used previously, and it certainly looks better. We have decided to use impostors when running the application in real time, but to render all the characters as full 3D models when producing videos, for better visuals.
On top of this, we have imported the KTH scene, which proved trickier than expected. We had to change the way we handle agent movement, and new colliders had to be introduced. Agents can now move on arbitrary terrain: everything is simply projected onto our 2D grid, where all the calculations are carried out as before. Everything works as expected, but there are some issues with collisions between agents and buildings that still need to be addressed.
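The projection idea above can be sketched roughly as follows. This is a minimal illustration, not our actual code: the function and parameter names (`world_to_grid`, `grid_to_world`, `terrain_height`) are made up, and the real implementation lives inside the engine.

```python
# Sketch: agents move on arbitrary terrain, but all crowd calculations run
# on a flat 2D grid. Positions are projected down by dropping the height
# coordinate, and lifted back onto the terrain surface for rendering.

def world_to_grid(x, z, cell_size):
    """Project a 3D world position (ignoring height) onto grid indices."""
    return int(x // cell_size), int(z // cell_size)

def grid_to_world(i, j, cell_size, terrain_height):
    """Lift a grid cell centre back onto the terrain surface.

    `terrain_height(x, z)` is assumed to query the terrain's height map.
    """
    x = (i + 0.5) * cell_size
    z = (j + 0.5) * cell_size
    return (x, terrain_height(x, z), z)
```

The point is that the 2D solver never needs to know about the terrain; only the rendering step queries the height.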
The global planning algorithm we created has also been improved to better match real human behaviour. The invisible leader moves towards the current target node, but it constantly checks for the next node in the path by simple raycasting. As soon as it can see the next node, it starts moving towards that one, regardless of its progress towards the previous node. In effect, the crowd now cuts corners and takes the shortest path.
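The corner-cutting rule can be sketched like this. It is only an illustration: `has_line_of_sight` stands in for the engine's raycast, and the function name is hypothetical.

```python
# Sketch of the leader's retargeting rule: keep heading for the current
# node, but as soon as a later node on the path is visible (no obstacle
# hit by the raycast), switch to it regardless of remaining distance.

def advance_target(leader_pos, path, current_idx, has_line_of_sight):
    """Return the index of the path node the leader should move towards."""
    idx = current_idx
    # Skip ahead while the next node is visible; this is what makes
    # the crowd cut corners instead of walking node to node.
    while idx + 1 < len(path) and has_line_of_sight(leader_pos, path[idx + 1]):
        idx += 1
    return idx
```

Calling this every update keeps the leader greedy about visibility, which is what produces the shortest-looking paths.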
As can be seen in the video above, some agents glitch a bit. Strangely, this only happens when we render videos, not when we run the scene in real time. We will look into that, along with the other improvements that need to be made. The next few days will be devoted to making the scene run perfectly, and after that, if possible, we will try to implement the preconditioner.