Concurrency And Parallelism
Getting computers to do what you want is hard. Getting them to do more than one thing at a time is even harder. In fact, for a single CPU core, it's impossible. But cores are so fast, and switch between things so fast, that it looks like they're doing more than one thing at a time. That's concurrency. Add in multiple cores and they really are doing multiple things at once. That's parallelism. Or at least they might be; the scheduler decides, and you only find out what actually ran in parallel after the fact. So how do you manage that?
The easiest way to manage it is to not manage it. Make a bunch of independent tasks, then throw them at the threading model or library built into your favorite language or package. In almost all cases it will do better than anything you could write yourself, as long as you remember two key things. First, the tasks really do need to be independent: if they are, you don't have to worry about them being interrupted or stepping on each other's state. Second, even if the tasks really are independent, having more tasks than cores means running them concurrently will be slower than working through them serially, core by core.
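As a concrete sketch of what "throw independent tasks at the built-in library" looks like (assuming Python and its standard concurrent.futures module; the task function is just a stand-in):

    from concurrent.futures import ThreadPoolExecutor

    def task(n: int) -> int:
        # Stand-in for real work: no shared state, no side effects,
        # so the pool is free to run it whenever and wherever it likes.
        return n * n

    # The library picks a sensible number of worker threads and handles
    # all the scheduling; we just hand over the tasks and collect results.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(task, range(10)))

    print(results)  # [0, 1, 4, 9, ...]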
Why slower? Because switching between those tasks takes time, time that could have been spent doing useful work. Or, more likely, the tasks aren't actually independent, so you end up doing extra work to make them independent, or you put in guard code (mutexes, semaphores, etc.) to make the sharing safe.
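For instance, here's a sketch of that guard code: a lock (mutex) around a shared counter, again using Python's standard threading module. The counter and the thread counts are made up for illustration.

    import threading

    counter = 0
    lock = threading.Lock()

    def increment(times: int) -> None:
        global counter
        for _ in range(times):
            # The mutex: only one thread may update the counter at a time.
            # Every acquire/release here is pure overhead versus serial code.
            with lock:
                counter += 1

    threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # 400000 every time with the lock; without it, maybe not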
So, when does it make sense to be concurrent? The simplest case is when you would otherwise block waiting on an external resource. If you make a web request, it can take seconds to get a response; rather than block, do the waiting concurrently. Reading from disk is faster than a web call, but milliseconds are still far longer than microseconds, so you can probably get some useful work done while you wait.
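A sketch of overlapping those waits, still Python and standard library only (the URLs are placeholders):

    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    # Placeholder URLs; each request spends most of its time blocked on
    # the network, which is exactly when concurrency pays off.
    urls = ["https://example.com/", "https://example.org/", "https://example.net/"]

    def fetch(url: str) -> int:
        with urlopen(url, timeout=10) as resp:
            return len(resp.read())

    # Three requests in flight at once: total time is roughly the
    # slowest response, not the sum of all three.
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        for url, size in zip(urls, pool.map(fetch, urls)):
            print(url, size, "bytes")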
And this is all on a single computer, where you have a good chance of everything actually happening and not losing any messages. Make it a distributed concurrent system and it gets even harder, but that's a story for another time.