Did you know that using GPUs to speed up processing is old technology? Back in the late '80s and early '90s I was working with the flight simulators at Wright-Patterson AFB. They ran on Gould SELs, and right next to the main CPU were a couple of other cabinets of about the same size: the co-processors, the IPU (integer processing unit) and the FPU (floating-point processing unit). Together they could do math orders of magnitude faster than the CPU. And just like with today's GPUs, feeding them data and getting the results back efficiently was key to getting the most performance out of them.
Similarly, the QUIC standards effort includes header compression to improve network performance. Thirty-something years ago, on a 286 CPU reading data from a local hard drive, we found that to get maximum performance we needed to compress the data on disk and decompress it after reading. For text files, the combination of lower disk I/O and extra processing gave us roughly a 25% improvement in response time.
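The trade-off is easy to demonstrate today. This is just an illustrative sketch (obviously not the original 286-era code) using Python's standard zlib module: store text compressed, pay a little CPU to decompress after a much smaller read.

```python
import zlib

# Trading extra CPU work for less I/O: store text compressed on disk,
# read fewer bytes, then decompress after the read.
text = ("The quick brown fox jumps over the lazy dog. " * 200).encode("utf-8")

compressed = zlib.compress(text)        # what we'd write to disk
restored = zlib.decompress(compressed)  # what we'd do after reading

assert restored == text                 # lossless round trip
print(f"raw bytes: {len(text)}, stored bytes: {len(compressed)}")
print(f"disk I/O reduced by {100 * (1 - len(compressed) / len(text)):.0f}%")
```

Repetitive text like this compresses extremely well; whether the savings beat the decompression cost depends on how slow the storage (or network) is relative to the CPU, which is exactly the balance we were tuning back then.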
The other place you'll see something that appears new but isn't is when scale gets involved. What's the difference between Kubernetes and a thread manager? Or an operating system scheduling tasks across physical and hyper-threaded CPUs inside a single computer? Conceptually, not much. Sure, the implementations differ, and we've learned a lot about how to build a good API and framework, but at its heart it's just: "Run this over there." "Hey, it stopped; put it over there instead."
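That shared core idea can be sketched in a few lines. This is a hypothetical toy, not how any real scheduler is implemented: the worker names, `place`, and `worker_failed` are all made up for illustration, but the loop is the same whether the "workers" are threads, cores, or nodes.

```python
# Toy scheduler: "run this over there"; if a worker stops,
# "put it over there instead."
workers = {"w1": set(), "w2": set(), "w3": set()}

def place(task):
    # Pick the least-loaded live worker and hand it the task.
    name = min(workers, key=lambda w: len(workers[w]))
    workers[name].add(task)
    return name

def worker_failed(name):
    # The worker stopped: reschedule its tasks on the survivors.
    orphaned = workers.pop(name)
    return [place(t) for t in orphaned]

for t in ("task-a", "task-b", "task-c"):
    place(t)

worker_failed("w1")  # whatever w1 was running lands elsewhere
assert sum(len(ts) for ts in workers.values()) == 3
```

Everything else that distinguishes a thread pool from an OS scheduler from an orchestrator (health checks, placement constraints, APIs) is refinement layered on this loop.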
So the next time someone talks to you about the new hotness, think about its past, what's changed, and what we can learn from the people who solved that problem before.