Final Update!

After a few weeks of absence, we can finally present our final work. It was actually finalized in late May; however, due to finals and work, we could not find the time to write this post until now. For those interested, the final paper can be found here.

Before we get to it, we have to mention some changes we made to the leader-follower model. One problem that arises with such an approach is that agents can, if unlucky, get stuck behind obstacles. A way of avoiding this is to let each agent look for the leader at a fixed frequency, storing the position where it was last seen. If the agent looks for the leader but finds it obscured by an obstacle, it moves towards the leader's last seen position instead. While doing this, the agent keeps looking for the leader, and if the leader becomes visible again, the agent resumes its path towards it.
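A minimal sketch of this last-seen-position rule, in Python; `line_of_sight` is a hypothetical helper standing in for the engine's raycast:

```python
def follow_target(agent_pos, leader_pos, last_seen, line_of_sight):
    """Return (goal, updated_last_seen) for one look-up tick.

    line_of_sight(a, b) stands in for the engine's raycast and returns
    True when no obstacle lies between a and b.
    """
    if line_of_sight(agent_pos, leader_pos):
        # Leader visible: head straight for it and remember this position.
        return leader_pos, leader_pos
    # Leader obscured: fall back to where it was last seen.
    return last_seen, last_seen
```

Each agent calls this at its fixed look-up frequency and steers towards the returned goal.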

Another issue when using a single leader for a group of agents is that all the agents follow the same point in space, thus forming narrow lines, which is not a behavior one would see in real life. We have tackled this problem by extending the leader into a line and assigning each agent to a node on it. This really improves the way the agents move; however, the naive method of doing this poses some problems. For example, assume that the crowd is moving through an open field towards a gate. The leader itself does not collide with the gate, and so it is able to pass through freely, even if it is wider than the gate itself. The agents following the outer parts of the leader will try to follow, but they will of course get stuck as they try to reach the part of the leader that moved through the wall. This is where we got the idea of letting the leader raycast ahead of itself to see whether the path is getting wider or narrower, and set its scale accordingly, enabling the agents to actually find the gate. The beauty of this method is that it gives the impression that the agents can see objects such as gates, and move through them in a natural way.
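The width adaptation can be sketched roughly as follows; `clear` is again a hypothetical stand-in for a raycast, and the pairwise sampling scheme is an illustrative choice, not necessarily the one used in the project:

```python
def adapt_leader_width(pos, forward, half_width, clear,
                       samples=8, lookahead=5.0):
    """Shrink or grow the leader line to fit the corridor ahead.

    clear(start, end) stands in for a raycast: True if the segment hits
    no obstacle. We probe points ahead at increasing lateral offsets and
    keep the widest offset whose left and right probes are both clear.
    """
    right = (-forward[1], forward[0])          # 2-D perpendicular
    ahead = (pos[0] + forward[0] * lookahead,
             pos[1] + forward[1] * lookahead)
    best = 0.0
    for i in range(1, samples + 1):
        off = half_width * i / samples
        left_probe = (ahead[0] - right[0] * off, ahead[1] - right[1] * off)
        right_probe = (ahead[0] + right[0] * off, ahead[1] + right[1] * off)
        if clear(pos, left_probe) and clear(pos, right_probe):
            best = off
        else:
            break
    return best
```

The returned half-width is then used to scale the leader line, and the follower nodes move with it.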

Now, at this point we were happy with how the algorithm worked, so we started to measure its performance, and we also set up three different scenarios to detect possible emergent behaviors. Unfortunately, we did not have time to implement the MIC(0) preconditioner, so there is room to further increase performance in future work.

Results and discussion

Performance

In our algorithm, we have two major bottlenecks: the inter-agent collision handling and the UIC enforcement. Both depend heavily on the grid resolution (the number of cells) – the UIC enforcement benefits from a lower grid resolution, while the inter-agent collision handling benefits from a higher one. The figure below shows the total computation time for the complete algorithm. With very few cells, the collision handling is costly, since every agent has to compare positions with almost every other agent. As we increase the number of cells, we reach an optimal point, performance-wise. Beyond this point, however, computation time rises again because of the UIC enforcement.

[Figure: total computation time for the complete algorithm as a function of grid resolution]
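The grid-based neighbour search that makes collision handling cheap at higher resolutions can be sketched like this – an illustrative uniform-grid hash in Python, not the project's actual implementation:

```python
from collections import defaultdict

def build_grid(positions, cell_size):
    """Hash each agent index into its grid cell for cheap neighbour queries."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        grid[(int(x // cell_size), int(y // cell_size))].append(i)
    return grid

def neighbours(i, positions, grid, cell_size):
    """Candidate collision partners: agents in the 3x3 block of cells around agent i."""
    cx = int(positions[i][0] // cell_size)
    cy = int(positions[i][1] // cell_size)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                if j != i:
                    found.append(j)
    return found
```

With few large cells, each cell holds many agents and the candidate lists approach all-pairs; with many small cells, the lists shrink, which is exactly the trade-off in the figure.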

Another limiting factor is the rendering of the agents by itself. We tried three different models when running our simulations: simple spheres, full 3D models with animation, and animated 2D pictures. If we spawn agents and only move them in random directions – that is, without applying anything from our algorithm – the figure below shows that the time per frame rises quite quickly with the number of agents. Even with the fastest model, the spheres, the graph shows that we can only spawn around 6000 agents before the time per frame becomes comparable with the inter-agent collision handling at its worst.

[Figure: time per frame as a function of agent count for the three rendering models]

Emergent behavior

In the first video, we set up a passage where two groups of agents spawn and walk towards the opposite side. At low densities, the UIC is not active, so the agents only step aside when they directly collide with another agent. If we increase the density a bit, streamlines start to appear, as the agents tend to follow other agents that have already paved the way. If we increase the density even more, the UIC really starts to kick in, and the crowd begins to sway a bit, then forms small vortices. And if we really increase the agent count, we see some interesting behavior where the green crowd actually tries to go around the inner red crowd. Even though it might have been more natural to form two lanes, one on each side of the road, our model finds a solution that eventually guides all the agents right.

We also spawned agents in a circle, as we have shown on this blog before, and let them walk towards their diametrically opposite positions. As the agents converge at the circle's center, outward pressure arises to counteract the inward motion. This results in a vortex, causing the agents to revolve around each other until the crowd finally dissolves.

Additionally, we used a model of the KTH building, where we spawned agents at three different positions and guided two groups through the main gate, while one group walked towards the subway, which in this case lies outside the camera range. Initially, the agents simply walk right past each other, as the density is quite low, but as we gradually increase the density, vortices start to appear in order to let the agents through. Then, just because we can, we pushed as many agents as we could fit into the area to see what would happen, and the model actually resolved it in a way, however unnatural it may look.

Concluding remark

Finally, we want to thank our supervisor, Christopher Peters, for all the invaluable advice and help he has provided!

Update 9

This week we have worked on tweaking a lot of components in our algorithm and we have imported the new models for the agents. Now, we actually have real people walking around instead of the spheres we have used previously, and it certainly looks better. We have decided to use the impostors when running the application in real-time but to render all the characters in 3D when rendering videos, for better visuals.

On top of this, we imported the KTH scene, which proved to be a bit trickier than expected. We had to change the way we handle agent movement, and new colliders had to be introduced. Now, the agents can move on arbitrary terrain, and everything is simply projected onto our 2D grid, where all the calculations are carried out as before. Everything works as expected, but there are some issues regarding collisions between agents and buildings that still need to be addressed.

The global planning algorithm that we created has also been improved to coincide better with real human behaviour. The invisible leader moves towards the current target node but always looks for the next node in succession by simple raycasting. As soon as it can see the next node, it starts moving towards that node, regardless of its progress towards the previous one. In effect, the crowd now cuts corners and takes the shortest path.

As can be seen in the video above, some agents are glitching a bit. The strange thing is that it is only present when we render videos and not when we run the scene in real-time. We will look into that, along with all the other improvements that need to be made. So the next few days will be devoted to making the scene run perfectly and after that, if possible, we will try to implement the preconditioner.

Update 8 – Prototype 3 Completed

During the last couple of days, we have worked on developing the global planning algorithm to be used for navigation. We started off by testing the free version of an existing A* implementation; however, it proved really inefficient in our setting. We also read in some papers that much simpler global planning algorithms can be much faster for really large crowds. So we decided to make our own algorithm from scratch, where we chose a simple roadmap approach.

Our roadmap is basically a graph structure consisting of strategically placed nodes that are coupled together bilaterally. Upon planning a path, we do a simple node-to-node breadth-first search and store the traversed nodes in a stack, representing the path. In order to reduce computational cost, we introduce an invisible leader for every group of agents with a common target position. The leader is the only object that carries out the path planning, and thus it is the only one actually following the path. All the agents then follow this leader, so in a sense they plan the path as a group.
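A minimal Python sketch of such a breadth-first search over the roadmap; the node names and graph layout here are hypothetical:

```python
from collections import deque

def plan_path(graph, start, goal):
    """Breadth-first search over the roadmap; returns the node path.

    graph maps each node to its bilaterally coupled neighbours.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk the parent links back to the start to recover the path.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                queue.append(nxt)
    return None  # goal unreachable
```

BFS finds a path with the fewest roadmap hops, which is cheap and good enough when the nodes are placed by hand.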

One problem with this approach is that agents that fall too far behind risk getting stuck behind obstacles when following the leader blindly. We solve this by having every agent raycast to the leader and store its position for as long as it is visible. When an obstacle gets in the way, the raycast detects it, and the agent follows the last seen position instead of following the leader directly. As soon as the leader is spotted again, the agent follows it as usual.

Below is a video of how the algorithm works. The roadmap, with both its nodes and edges, is drawn and so is the calculated path. The leader is shown by the larger object for illustrative purposes.

Now that the last prototype is completed, what remains now is to optimize the code, primarily to implement the MIC(0)-preconditioner for the MPRGP-algorithm, and to import the new models and scenes provided in order to get a nice presentation. After that, given that we have some time left, we will try to extend this project in some cool way.

Update 7 – Final presentation in SI2335, Simulation Physics

Using this degree project as our final project in SI2335, Simulation Physics, turned out to be a great success, and it cannot be stressed enough how much time and work we saved in doing so. We finished the course with an outstanding presentation and also made a lot of progress in this project.

During the days leading up to the final presentation, which was held on Tuesday this week, a lot of effort was put into refining the algorithm we already had and creating scenes in which we could simulate different scenarios. In addition, we worked quite a bit on the report that was, of course, a part of that project, and we realised, upon its completion, that we now had our first draft of the report associated with this degree project.

The simulations we ran are shown in the videos below, and the results are quite interesting. Something that strikes you right away when watching these videos is that the movement of our crowds shows a striking similarity to fluid flow, which is to be expected considering the method we are using. You may also notice that things seem to be running pretty smoothly now. That is, unfortunately, not because the algorithm has been optimized, but because we render the videos differently now. Instead of recording the runs with a screen recorder, we save each and every frame as a PNG file, via simple scripting, and use external software to turn the individual frames into a video. The benefit is that we can produce videos at a fixed frame rate of our choice, regardless of the frame rate at runtime. The result is quite pleasing.

After the presentation, we decided to take a break for a couple of days, and so we have not done anything since. With the submission of this post, though, the project is resumed.

Update 6 – Prototype 2 Completed

We have made a lot of progress over the last two days, and now we actually have something presentable. A lot of improvements have been made to the algorithm, addressing the problems we described before, mainly through bug fixes. Not only do we now get the results that would be expected when solving for the UIC, we have also managed to improve performance by almost 100%. The sudden gain stems from the fact that we recently discovered a severe blunder in our matrix representation in the LCP solver. The matrix involved in the LCP is in fact sparse. Naturally, we intended to define it as a sparse matrix from the start, but somewhere along the way we messed up and it became a dense matrix instead. Upon changing the definition from dense to sparse, we went from 18 FPS to 32 FPS, a quite distinct improvement.
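To illustrate why the representation matters: a sparse matrix-vector product touches only the nonzero entries, while the dense one touches every entry, zeros included. A toy compressed-sparse-row (CSR) version in Python (the project itself presumably relies on a library implementation):

```python
def dense_matvec(A, x):
    """O(n^2) product: every entry is visited, zeros included."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def to_csr(A):
    """Compressed-sparse-row form: store only the nonzero entries."""
    data, cols, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0.0:
                data.append(v)
                cols.append(j)
        indptr.append(len(data))
    return data, cols, indptr

def csr_matvec(csr, x):
    """O(nnz) product: cost scales with the nonzeros, not with n^2."""
    data, cols, indptr = csr
    return [sum(data[k] * x[cols[k]]
                for k in range(indptr[i], indptr[i + 1]))
            for i in range(len(indptr) - 1)]
```

Since the LCP matrix couples only neighbouring grid cells, the nonzero count grows linearly with the cell count, which is where the FPS gain comes from.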

Apart from this major bug fix, we fixed some other small errors that were found, and now the agents move in a much more consistent way. It certainly looks like we are getting closer and closer to our final goal. One issue still present, though, is that the result is very sensitive to parameter changes, something we will look into later. Another is that we have to implement a preconditioner to further improve the performance of the LCP solver. Other than that, the UIC implementation seems to work as intended, and thus prototype 2 is completed.

Finally, some new features have been added. First of all, we have added color coding to the agents which makes it a lot easier to track the movements of groups of agents. In addition, much earlier than expected, obstacles are now here! We now have the ability to place arbitrary obstacles on the ground and the coefficients describing area availability in the covered cells are set automatically, thus we get the boundary conditions around every obstacle. The coefficients later go into the LCP and will result in agents avoiding the affected cells. All of the improvements made and all of the added features are summarized in the two videos below.

We clearly see a spiral in the movement of the agents. Note that it appears only because we are correcting the velocities by solving the LCP. Pretty cool, isn't it? In the next video, some obstacles are introduced. We have placed walls around the edges of the plane to create a neat border, and a block is placed right in the middle. The cells that are unavailable are shown in red.

Here, we can clearly see that the agents avoid entering the unavailable cells for the most part. Note, however, that a few agents still tend to enter at points where the density is lower. This is not a problem though, because we do handle collisions between agents and the obstacles in pretty much the same way they are handled among agents themselves.
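The automatic setting of the area-availability coefficients for cells covered by obstacles can be sketched as follows, with obstacles simplified to axis-aligned boxes for illustration:

```python
def mark_obstacles(nx, ny, cell_size, obstacles):
    """Area-availability coefficients per cell: 1.0 free, 0.0 blocked.

    obstacles are axis-aligned boxes (xmin, ymin, xmax, ymax); any cell
    they overlap is marked unavailable, giving the boundary conditions
    that feed into the LCP.
    """
    avail = [[1.0] * nx for _ in range(ny)]
    for (x0, y0, x1, y1) in obstacles:
        for j in range(max(0, int(y0 // cell_size)),
                       min(ny, int(y1 // cell_size) + 1)):
            for i in range(max(0, int(x0 // cell_size)),
                           min(nx, int(x1 // cell_size) + 1)):
                avail[j][i] = 0.0
    return avail
```

Cells with coefficient 0.0 then enter the LCP as unavailable, so the solver steers the flow around them.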

To conclude, we have now come a long way since we first started off and we are pretty happy with how the agents behave at high densities. There are, of course, still some issues that we will need to solve but that is something we will tackle later. Now, we are focusing on the upcoming presentation of our final project in SI2335, Simulation Physics, where we will need to show a scenario where we simulate a dense crowd and record some data from that. This is our primary objective right now and therefore we will continue to improve the algorithm afterwards.

Update 5

Now that the exams are over, we have resumed working on our algorithm. Mainly, we have gone through the code directly involved in solving for the UIC to make sure that every operation is carried out correctly. Some errors have been discovered and fixed, but we are certain that there are more parameters affecting the result that need to be dealt with. The good news, though, is that the fixes we have made have greatly improved the behaviour of the agents. At least now we get results that are logical, that we can actually understand.

In the video below, we run the same scenario as before, with the agents spawned in a disc and their target positions at the diametrically opposite points. The green arrows indicate the velocity component that results from solving for the UIC.

We clearly see an initial spiral taking form, which is what we would want. However, it vanishes quickly and turns into an outward motion. When the agents have moved a certain distance out from the middle, the preferred velocity takes over and steers them towards the middle again. This is simply a result of the changing density, and thus the changing pressure gradient, in the middle. It continues in this fashion for a while, until all the agents suddenly start to move in the same direction, something that we cannot really explain.

Update 4

At this stage, we have implemented the algorithm for solving the LCP arising when applying the UIC. Further, we have made an attempt at actually solving the problem and correcting the agent velocities accordingly. The results, however, do not match what would be expected. The video below clearly illustrates the strange behaviour produced by our implementation.
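For reference, the conditions an LCP solver has to satisfy are x ≥ 0, Ax + b ≥ 0, and xᵀ(Ax + b) = 0. A tiny projected Gauss-Seidel iteration in Python illustrates them; note that this is a generic sketch of an LCP solver, not the MPRGP algorithm used in this project:

```python
def solve_lcp_pgs(A, b, iters=100):
    """Projected Gauss-Seidel for the LCP:
    find x >= 0 with A x + b >= 0 and x . (A x + b) = 0.
    Illustrative only; assumes A has a nonzero diagonal.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # Residual excluding the diagonal term of row i.
            r = b[i] + sum(A[i][j] * x[j] for j in range(n)) - A[i][i] * x[i]
            # Solve row i for x[i], then project onto x[i] >= 0.
            x[i] = max(0.0, -r / A[i][i])
    return x
```

Each sweep solves one row at a time and clamps the result to the non-negative orthant, which is exactly the complementarity structure the velocity correction has to respect.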