Fundamentally, for OpenGL, "getting shaders" meant moving from a fixed, built-in set of graphics effects to giving developers custom control over the graphics pipeline.
Imagine you hired a robot artist to draw.
Before Shaders (The Old Way): The robot had a fixed set of instructions. You could only tell it "draw a red circle here" or "draw a blue square there." You could change the colors and basic shapes, but you couldn't change how it drew them. This was called the fixed-function pipeline.
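To make that concrete, here is roughly what the old way looked like in C. This is a minimal sketch assuming a window and GL context already exist (for example via GLUT or GLFW):

```c
/* Fixed-function drawing, OpenGL 1.x style: you pick from built-in knobs
   (current color, primitive type), but you cannot change HOW it draws. */
#include <GL/gl.h>

void draw_blue_square(void)
{
    glColor3f(0.0f, 0.0f, 1.0f);    /* "use blue" -- a built-in setting   */
    glBegin(GL_QUADS);              /* "draw a quad" -- built-in behavior */
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.5f,  0.5f);
    glVertex2f(-0.5f,  0.5f);
    glEnd();
}
```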
After Shaders (The New Way): You can now give the robot custom, programmable instructions, or shaders. You can write little programs that tell it exactly how to draw things.
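In practice, "handing the robot instructions" means compiling small GLSL programs and linking them into a pipeline. Here is a rough C sketch of the OpenGL 2.0 flow; it assumes an extension loader such as GLEW supplies the entry points, the compile and build_program helper names are just illustrative, and error checking is omitted:

```c
/* Sketch: compiling and linking shaders in OpenGL 2.0+.
   Assumes a current GL context and that glewInit() has already run. */
#include <GL/glew.h>

/* Compile one shader stage from GLSL source text (error checks omitted). */
static GLuint compile(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);   /* create an empty shader object */
    glShaderSource(shader, 1, &src, NULL);  /* hand it our program text      */
    glCompileShader(shader);                /* the driver compiles it        */
    return shader;
}

/* Link a vertex + fragment stage into one program the GPU can run. */
static GLuint build_program(const char *vs_src, const char *fs_src)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_src));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_src));
    glLinkProgram(prog);
    return prog;  /* activate with glUseProgram(prog) before drawing */
}
```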
The Two Original Shaders

This programmability was primarily split into two types of shaders:
Vertex Shader: This program runs for every single point (vertex) of a 3D model. Its job is to figure out where that point should be positioned on your 2D screen. You could now program custom effects like making a character model jiggle or a flag wave in the wind.
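As a sketch of the flag-wave idea, here is a GLSL vertex shader carried as a C string literal, the way a C host program typically embeds it. The u_time uniform is a hypothetical value the application would update each frame:

```c
/* Vertex shader (GLSL 1.20, embedded as a C string): runs once per vertex.
   A sine displacement makes a flat, finely tessellated quad ripple like a flag. */
static const char *flag_wave_vs =
    "#version 120\n"
    "uniform float u_time;  // seconds, updated by the application\n"
    "void main() {\n"
    "    vec4 p = gl_Vertex;                    // original model-space position\n"
    "    p.z += 0.1 * sin(8.0 * p.x + u_time);  // wave travels along x\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * p;  // project to screen\n"
    "}\n";
```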
Fragment (or Pixel) Shader: After the shape is positioned, this program runs for every single pixel inside that shape. Its job is to decide the final color of that pixel. This is where you program complex lighting, shadows, reflections, and surface textures like wood grain or rust.
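And a matching fragment-shader sketch, again GLSL in a C string, computing simple per-pixel diffuse (Lambert) lighting. The v_normal varying is an assumption about what a companion vertex shader would pass along:

```c
/* Fragment shader (GLSL 1.20): runs once per covered pixel and picks its color.
   Here: a red surface lit by one fixed directional light. */
static const char *diffuse_fs =
    "#version 120\n"
    "varying vec3 v_normal;  // surface normal, interpolated across the triangle\n"
    "void main() {\n"
    "    vec3 L = normalize(vec3(0.3, 0.8, 0.5));            // light direction\n"
    "    float diff = max(dot(normalize(v_normal), L), 0.0); // facing the light?\n"
    "    gl_FragColor = vec4(vec3(1.0, 0.0, 0.0) * diff, 1.0);\n"
    "}\n";
```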
What about bump mapping? Where's that done? That's a texture that changes the geometry.
That's usually a job for the fragment shader.
It doesn't actually change the geometry; it perturbs the surface normals used in the lighting calculation, so a flat surface merely looks bumpy.
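Here is a hedged sketch of that trick: the fragment shader reads a perturbed normal from a texture and lights with it, while the real geometry stays flat. The u_normal_map sampler and v_uv coordinates are assumed inputs, and the tangent-space transform a full implementation needs is skipped for brevity:

```c
/* Fragment shader (GLSL 1.20): bump/normal mapping as a lighting trick.
   The normal comes from a texture, not from real geometry, so a flat
   polygon shades as if it were bumpy -- no vertex ever moves. */
static const char *bump_fs =
    "#version 120\n"
    "uniform sampler2D u_normal_map;  // RGB texel encodes a normal\n"
    "varying vec2 v_uv;               // texture coordinates\n"
    "void main() {\n"
    "    vec3 n = texture2D(u_normal_map, v_uv).rgb * 2.0 - 1.0; // [0,1] -> [-1,1]\n"
    "    vec3 L = normalize(vec3(0.3, 0.8, 0.5));\n"
    "    float diff = max(dot(normalize(n), L), 0.0);\n"
    "    gl_FragColor = vec4(vec3(diff), 1.0);\n"
    "}\n";
```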