Mostly it's for accelerating heavily parallelizable tasks through the GPU's "cores" (CUDA cores on Nvidia, compute units / stream processors on AMD), GPU rendering being the obvious example. GPUs nowadays have thousands of cores (an RTX 3090 has over 10,000 CUDA cores), while CPUs top out at "only" around 64.
Of course a single GPU core is quite weak compared to a CPU core, but because they're so specialized they're very good at doing a small set of tasks extremely fast, while CPU cores can handle all kinds of work, just more slowly in comparison.
That's compute.
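To make that concrete, here's a minimal CUDA sketch of the classic compute case: adding two big arrays with one thread per element, so thousands of cores chew through it at once. The array size and launch configuration are just example values, and it assumes a CUDA-capable card plus the CUDA toolkit (nvcc).

```
#include <cuda_runtime.h>
#include <cstdio>

__global__ void add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // each thread handles exactly one element
}

int main() {
    const int n = 1 << 20;                     // ~1 million elements (example size)
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory keeps the sketch short
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover all elements
    add<<<blocks, threads>>>(a, b, c, n);      // thousands of cores work on this at once
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);               // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```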
Apart from all those cores at the hardware level, GPUs can be programmed through modern APIs like CUDA or OptiX, which then access the dedicated hardware features (RT cores, Tensor cores, etc.). Same on AMD's side with their own stack (HIP / ROCm).
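For example, the CUDA runtime API lets you ask what hardware is actually in the machine. A small sketch (device 0 just means the first GPU; how many CUDA cores sit inside each streaming multiprocessor depends on the architecture):

```
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);  // query the first GPU in the system
    printf("GPU: %s\n", prop.name);
    printf("Streaming multiprocessors: %d\n", prop.multiProcessorCount);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    return 0;
}
```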
Of course there are some other things GPUs do that older GPUs did as well (before all the general-purpose compute cores showed up), like driving the monitor, OpenGL rendering, and so on.
That's as far as my knowledge goes... But I'm no GPU expert by any means.
Because most of what you do in CGI is based on stuff you can split up: pixels, frames, buckets, render layers, compositing layers... CGI has a lot of areas that can make use of the parallel nature of GPU cores.
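Pixels are the clearest example: every pixel can be processed independently, so you map one thread to one pixel. A rough CUDA sketch of that idea (the image size, the dummy grey buffer, and the gamma value are all made up for illustration):

```
#include <cuda_runtime.h>
#include <math.h>

// One thread per pixel; each pixel is completely independent of its neighbours.
__global__ void gammaCorrect(float* pixels, int width, int height, float gamma) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height) {
        int idx = y * width + x;                        // flatten the 2D coordinate
        pixels[idx] = powf(pixels[idx], 1.0f / gamma);  // simple gamma curve per pixel
    }
}

int main() {
    const int width = 1920, height = 1080;              // made-up image size
    float* pixels;
    cudaMallocManaged(&pixels, width * height * sizeof(float));
    for (int i = 0; i < width * height; ++i) pixels[i] = 0.5f;  // dummy grey "image"

    dim3 threads(16, 16);                                // 256 threads per block
    dim3 blocks((width + 15) / 16, (height + 15) / 16);  // 2D grid matching the image
    gammaCorrect<<<blocks, threads>>>(pixels, width, height, 2.2f);
    cudaDeviceSynchronize();

    cudaFree(pixels);
    return 0;
}
```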