"Basic" is a relative term. Modern GPUs do not work the same way memory-mapped graphics did, and working with them is different at a fundamental level.
You are probably better off searching for old graphics programming books from the '90s. The code they contain likely won't work anymore, but the algorithms should be what you're looking for, and they shouldn't be hard to adapt.
Fundamentally different? Don't GPUs just speed things up in hardware?
No. With the old style you had to draw every pixel, and you'd have to develop primitives for drawing a point, a line, or a triangle. With a GPU you essentially give the GPU a bunch of data and tell it to draw points, lines, or triangles for you. You then create "shaders" which are functions that the GPU calls to ask where to position a vertex, or what color to make a pixel, with some "magic" that passes data between the two. It's best understood by looking at the code for the almighty gradient triangle: https://webgpufundamentals.org/webgpu/lessons/webgpu-inter-s...
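To make the "old style" concrete: with memory-mapped graphics you wrote primitives yourself that set individual pixels in a framebuffer. A minimal sketch (names made up, framebuffer modeled as a flat list) of the classic Bresenham line primitive:

```python
def draw_line(buf, width, x0, y0, x1, y1, color):
    # Bresenham's line algorithm: walk from (x0, y0) to (x1, y1),
    # setting one pixel per step in a flat row-major framebuffer.
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        buf[y0 * width + x0] = color  # "poke" the pixel into memory
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:  # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:  # step vertically
            err += dx
            y0 += sy

# Draw a diagonal across an 8x8 one-byte-per-pixel buffer:
fb = [0] * (8 * 8)
draw_line(fb, 8, 0, 0, 7, 7, 1)
```

Every point, line, and triangle routine was built this way, pixel by pixel, which is exactly the work the GPU now does for you.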
It's all those parts which, in the absence of a GPU, would be done by the CPU. No qualitative difference.
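Right, the same pipeline can be simulated entirely on the CPU. A toy sketch (all names hypothetical) in the spirit of the gradient-triangle demo: per-vertex colors are interpolated across the triangle with barycentric weights (the "magic" that passes data between the stages) and handed to a fragment-shader-like function for each covered pixel:

```python
def barycentric(p, a, b, c):
    # Barycentric weights of point p with respect to triangle (a, b, c).
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return w0, w1, 1.0 - w0 - w1

def fragment_shader(color):
    # Per-pixel stage: here it just rounds the interpolated color.
    return tuple(round(ch) for ch in color)

def rasterize(verts, vert_colors, width, height):
    # For every pixel whose center falls inside the triangle,
    # interpolate the per-vertex colors and call the fragment shader.
    img = {}
    for y in range(height):
        for x in range(width):
            w0, w1, w2 = barycentric((x + 0.5, y + 0.5), *verts)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:
                color = [w0 * ca + w1 * cb + w2 * cc
                         for ca, cb, cc in zip(*vert_colors)]
                img[(x, y)] = fragment_shader(color)
    return img

# Red, green, and blue corners blend across the triangle:
tri = [(0, 0), (7, 0), (0, 7)]
cols = [(255, 0, 0), (0, 255, 0), (0, 0, 255)]
image = rasterize(tri, cols, 8, 8)
```

The qualitative difference is only who runs the loops: on a GPU the rasterizer and the per-pixel calls happen in fixed-function hardware and massively parallel shader cores instead of nested Python loops.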