Getting more out of our CPUs
Stop relying so heavily on GPUs; early computers didn't have them at all. Yet we now lean on graphics processing for almost everything, and at some point we even started using GPUs for general compute instead of writing optimized code and algorithms for the CPU and its dedicated silicon. For example, the Apple M-series chips can encode video at something like 1000 fps, but only with software that takes advantage of the raw media-engine silicon on the chip to process video frames faster. If more software took this approach of using the optimized silicon, a lot of computing would be faster.
Another example is the timing clock that drives how the computer works. Operating systems have traditionally used a periodic tick to decide when to run their next instruction, mainly because the x86 processor's interrupt model worked this way. But today we have multiple cores, and we would need some form of instruction-set programming model that lets us run multiple of these clock sources independently, or even a different tick/clock-based model of compute entirely.