I didn't study much meteorology in undergrad, but one thing that was impressed upon us is that any forecast beyond maybe 3 days is basically guesswork.

I think what might be getting observed here is that when forecasting that many days out, the local data becomes so unimportant to the model's outcome that the model is just reflecting historical climate trends, which kind of makes both the same kind of model. I.e., when forecasting tomorrow, the current temperature and pressure readings really make a difference, but pushed out to 7 days, those data essentially become a proxy for typical weather at that time of year, possibly down-weighted by a lot.
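To make that concrete, here's a toy sketch of the intuition (not any real model's internals; the blend and the decay constant are invented for illustration): weight today's observations against climatology, with the weight decaying as lead time grows.

    import math

    def toy_forecast(current_temp, climo_temp, lead_days, e_folding=2.5):
        """Toy blend: weight on current conditions decays with lead time."""
        w = math.exp(-lead_days / e_folding)  # weight on today's observations
        return w * current_temp + (1 - w) * climo_temp

    # A cold snap today (-5 C) against a 10 C climatological norm:
    for day in (1, 3, 7):
        print(day, round(toy_forecast(-5.0, 10.0, day), 1))
    # day 1 -> -0.1, day 3 -> 5.5, day 7 -> 9.1: drifting toward climatology

By day 7 the output is mostly climatology, which is exactly that "proxy for typical weather at that time of year" effect.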

I just woke up and I feel like I'm doing a very poor job trying to describe this.

I think you are describing it pretty well, and I've noticed the same thing. The farther into the future the forecast goes, the higher the probability is that it will look like the historical average.

One thing that I've found helps a lot is to go to weather.gov and look at the "forecast discussion". It will often help you understand what kinds of uncertainty exist within the forecast.

It isn't unusual to see notes that make it really clear that 24-48 hour variations are expected, or that massive differences will hinge on hard-to-predict variables: "Hey, we think it will rain heavily as far south as X, but it might end up staying north of Y, in which case X will stay dry."

It's easy to see how hard forecasting can be, even when the forecast itself turns out to be fairly accurate at a high level.
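If you want to pull that forecast discussion programmatically, NWS also serves it through api.weather.gov. A rough sketch, assuming the AFD product endpoints behave the way I remember (OKX is just an example forecast-office code, and the contact address in the User-Agent is a placeholder; NWS asks for a descriptive one):

    import requests

    OFFICE = "OKX"  # example office code; use your local forecast office
    HEADERS = {"User-Agent": "afd-reader (you@example.com)"}

    # List recent Area Forecast Discussion (AFD) products for the office.
    listing = requests.get(
        f"https://api.weather.gov/products/types/AFD/locations/{OFFICE}",
        headers=HEADERS, timeout=10,
    ).json()

    # Entries appear newest-first; fetch the most recent product's full text.
    latest = requests.get(listing["@graph"][0]["@id"],
                          headers=HEADERS, timeout=10).json()
    print(latest["productText"])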

That depends on what you care about. Will it rain at a specific date/time? Getting that right for tomorrow is hard, much less a year out. However, you can often predict whether this will be a wet or dry year with reasonable accuracy, and that is important information (farmers plant different seeds). I doubt their model is very good at this, but science can do well enough.
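Seasonal outlooks usually frame that wet/dry call against historical terciles rather than as a point forecast. A minimal sketch of the bookkeeping, with made-up rainfall totals:

    import numpy as np

    # Invented seasonal rainfall totals (mm) for ten past years.
    history = np.array([410, 385, 520, 300, 450, 475, 330, 560, 400, 490])

    # Tercile cutoffs: bottom third = "dry", top third = "wet".
    dry_cutoff, wet_cutoff = np.percentile(history, [100 / 3, 200 / 3])

    def classify(total_mm):
        if total_mm <= dry_cutoff:
            return "dry"
        if total_mm >= wet_cutoff:
            return "wet"
        return "near normal"

    print(classify(310))  # dry
    print(classify(540))  # wet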