So now we know that threads can access each other’s results through shared and global memory. This means they can work together on a computation, but there’s a problem: what if a thread tries to read a result before another thread has had a chance to compute or write it? Threads need to synchronize with each other to avoid this situation, and this need for synchronization is really one of the most fundamental problems in parallel computing.

The simplest form of synchronization is called a barrier. A barrier is a point in the program where all the threads stop and wait, and only when all the threads have reached the barrier can they proceed on to the rest of the code. Let’s illustrate this. Here are some threads, all proceeding along through the code. I’ll draw them in different colors, and I’m drawing them at different lengths so you get the idea that they’re at different points in their execution of the program. The idea is that when they reach the barrier, they’re going to stop and wait for all the others to catch up.

So in my drawing, the red one reaches the barrier first and stops. In the meantime, the blue one and the green one keep proceeding along; eventually the blue one arrives at the barrier and stops, and the green one is the last one to arrive at the barrier and stops. Now that all 3 threads in my example have arrived at the barrier, they’re all free to go again. And so they’ll all proceed, and we don’t actually know which one’s going to go first: maybe the blue one is first out of the gate, maybe green is next, maybe red is last. So let’s look at some code to illustrate this.
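Before that, here is a minimal sketch of my own (not the lecture’s upcoming code) of what a barrier looks like in CUDA, where `__syncthreads()` acts as a barrier for all the threads in a block. The kernel shifts a shared array left by one element; the array size of 128 and the kernel name are just choices for this illustration:

```cuda
#include <cstdio>

// Sketch: each thread shifts one element of a shared array left by one.
// Without the first barrier, a thread might read s[idx + 1] before its
// neighbor has written it; without the second, a thread might overwrite
// s[idx] while a neighbor is still reading it.
__global__ void shift_left(int *d_out)
{
    __shared__ int s[128];
    int idx = threadIdx.x;

    s[idx] = idx;           // every thread writes its own element
    __syncthreads();        // barrier: wait until all writes are visible

    int v = (idx < 127) ? s[idx + 1] : 0;  // safe read of neighbor's value
    __syncthreads();        // barrier: all reads done before any overwrite

    s[idx] = v;
    d_out[idx] = s[idx];
}

int main()
{
    int h_out[128];
    int *d_out;
    cudaMalloc(&d_out, sizeof(h_out));
    shift_left<<<1, 128>>>(d_out);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("h_out[0] = %d\n", h_out[0]);
    cudaFree(d_out);
    return 0;
}
```

Note that `__syncthreads()` only synchronizes the threads within a single block, which is exactly the stop-and-wait behavior drawn in the diagram: every thread in the block pauses at the call until the last one arrives, and only then do they all continue.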