After a few weeks of absence, we can finally present our final work. It was actually finalized in late May, but due to finals and work we could not find the time to write this post until now. For the interested, the final paper can be found here.
Before we get to it, we have to mention some changes we made to the leader-follower model. One problem with such an approach is that agents can, if unlucky, get stuck behind obstacles. A way of avoiding this is to let each agent look for the leader at a fixed frequency, storing the position where it was last seen. If the agent looks for the leader but it is obscured by an obstacle, the agent moves towards the leader's last seen position instead. While doing so, it keeps looking for the leader, and if the leader becomes visible again, the agent resumes its path towards it.
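The fallback above can be sketched in a few lines. This is our own minimal illustration, not the project's actual code; the class and method names are made up for the example.

```python
class Follower:
    """Sketch of the last-seen-position fallback in a leader-follower model.
    A hypothetical illustration; the real implementation differs."""

    def __init__(self, position):
        self.position = position
        self.last_seen = None  # last position where the leader was visible

    def update_target(self, leader_pos, leader_visible):
        # Called at a fixed frequency. If the leader is visible, remember
        # where we saw it and head straight for it.
        if leader_visible:
            self.last_seen = leader_pos
            return leader_pos
        # Leader obscured by an obstacle: move towards the position where
        # it was last seen (or stay put if we never saw it).
        return self.last_seen if self.last_seen is not None else self.position
```

Because the target is re-evaluated every sampling tick, the agent automatically snaps back to following the leader the moment line of sight is restored.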
Another issue when using a single leader for a group of agents is that all the agents will follow the same point in space, thus forming narrow lines, which is not a behavior one would see in real life. We have tackled this problem by extending the leader into a line and assigning each agent to a node on it. This really improves the way the agents move; however, the naive method poses some problems. For example, assume that the crowd is moving through an open field towards a gate. The leader itself does not collide with the gate, and so it is able to pass through freely, even if it is wider than the gate itself. The agents following the outer parts of the leader will try to follow, but they will of course get stuck as they try to follow the part of the leader that moved through the wall. This is where we got the idea of letting the leader raycast ahead of itself to see if the path is getting wider or narrower, and set its scale accordingly, enabling the agents to actually find the gate. The beauty of this method is that it gives the impression that the agents can see objects such as gates, and move through them in a natural way.
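One way to picture the width adaptation: cast forward rays at several lateral offsets along the leader line, and shrink the line to the corridor left open by the innermost blocked rays. This is a hedged sketch under our own formulation, with hypothetical names, not the paper's exact code.

```python
def fit_leader_width(ray_offsets, blocked, max_width):
    """Adapt the line leader's width to the free space ahead.

    ray_offsets: signed lateral offsets at which forward rays are cast
                 from the leader line (negative = left, positive = right).
    blocked:     parallel list of booleans, True if that ray hit an obstacle.
    max_width:   the leader's full width in open terrain.
    """
    # The passable corridor is limited by the innermost blocked ray on
    # each side of the centre; an unblocked side keeps its half-width.
    left = min((abs(o) for o, b in zip(ray_offsets, blocked) if b and o < 0),
               default=max_width / 2)
    right = min((o for o, b in zip(ray_offsets, blocked) if b and o > 0),
                default=max_width / 2)
    return min(left + right, max_width)
```

With rays at offsets -2, -1, 0, 1, 2 and only the outermost two hitting the gate posts, the leader narrows to width 4, so the agents assigned to its outer nodes are pulled in through the opening instead of into the wall.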
Now, at this point we were happy with how the algorithm worked, so we started to measure the performance, and we also set up three different scenarios to detect possible emergent behaviors. Unfortunately, we did not have time to implement the MIC(0) preconditioner, so performance could be improved further in future work.
Results and discussion
In our algorithm, we have two major bottlenecks: the inter-agent collision handling and the UIC enforcement. These two are heavily dependent on the grid resolution (the number of cells) – the UIC enforcement benefits from a lower grid resolution, while the inter-agent collision handling benefits from a higher grid resolution. The figure below shows the total computation time for the complete algorithm. At first, we have very few cells and the collision handling is costly, since every agent has to compare positions with almost every other agent. As we increase the number of cells, we reach an optimal point, performance-wise. Beyond this point, however, computation time increases again because of the UIC enforcement.
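The collision-handling side of this tradeoff can be illustrated with a uniform grid for neighbor search. This is our own simplified sketch (the actual implementation is different); it shows why larger cells mean more candidate pairs per agent, while a finer grid means fewer candidates but more cells for the UIC pressure solve to cover.

```python
from collections import defaultdict

def build_grid(positions, cell_size):
    """Bucket agent indices into uniform grid cells (a toy 2D sketch)."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)
    return grid

def collision_candidates(grid, pos, cell_size):
    """Agents in the 3x3 block of cells around pos.

    A coarse grid (large cell_size) returns many candidates, so collision
    handling approaches the all-pairs worst case; a fine grid returns few
    candidates but makes the UIC grid larger.
    """
    cx, cy = int(pos[0] // cell_size), int(pos[1] // cell_size)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            out.extend(grid.get((cx + dx, cy + dy), []))
    return out
```

Scanning only the neighboring cells keeps the per-agent cost proportional to the local density rather than to the total agent count, which is exactly what degrades when the cells are too few and too large.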
Another limiting factor is the rendering of the agents by itself. We have tried three different models when running our simulations: simple spheres, full 3D models with animation, and animated 2D pictures. If we spawn agents and only move them in random directions, that is, without applying anything from our algorithm, we see in the figure below that the time per frame rises quite quickly with the number of agents. Even with the fastest models, the spheres, the graph shows that we can only spawn around 6000 agents before the time per frame is comparable with the inter-agent collision handling at its worst.
In the first video, we set up a passage where we let two groups of agents spawn and walk towards the opposite side. At low densities, the UIC will not be active, so the agents only step aside when they directly collide with another agent. If we increase the density a bit, streamlines start to appear, as the agents tend to follow other agents that have already paved the way. If we increase the density even more, we can see that the UIC really starts to kick in, and the crowd starts to sway a bit, only to form small vortices. And if we really increase the agent count, we see some interesting behavior where the green crowd actually tries to go around the inner red crowd. Even though it might have been more natural to form two lanes, one on each side of the road, our model finds a solution that eventually guides all the agents right.
We also spawned agents in a circle, as we have shown on this blog before, and let them walk towards their diametrically opposite positions. As the agents converge at the circle center, outward pressure arises to counteract the inward motion. This results in a vortex being formed, causing the agents to revolve around each other until the crowd finally dissolves.
Additionally, we used a model of the KTH building, where we spawned agents at three different positions and guided two groups through the main gate, while one group walked towards the subway, which in this case is outside the camera range. Initially, the agents simply walk right past each other, as the density is quite low, but as we gradually increase the density, we see that vortices start to appear in order to let the agents through. Then, just because we can, we pushed in as many agents as we could fit in the area to see what would happen, and the model actually resolved it in a way, however unnatural it may be.
Finally, we want to thank our supervisor, Christopher Peters, for all the invaluable advice and help he has provided!