Just tested it, and while it seems interesting, there doesn't (yet) seem to be any intelligence about the imagery itself from what I can tell. For example, it can give me insights about vegetation data overlaid on a map (or try to), but it can't "find the most fertile grassland in this radius".
When there is a way to actually "search" satellite images with an LLM, it will be a game changer for businesses (and likely not to the ultimate benefit of consumers, unfortunately)
How would you even define “most fertile grassland”? What does “fertile” mean - soil nutrients, water availability, or productivity for a specific crop? And what counts as “grassland”? Are you talking about a 1-acre parcel, something for sale, or land next to a road?
There’s already data for all of this (SSURGO soil maps, vegetation indices, climatology datasets, and more) that could help you find the “most something” in a given radius. But there are too many variables for a single AI to guess your intent. That’s not how people who actually farm, conserve, or monitor land tend to search; they start from a goal and combine the relevant data layers accordingly.
In fact, crop-specific fertility maps have existed for decades, based on soil and climate averages, and they’re still good enough for most practical uses today.
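To make "combine the relevant data layers" concrete, here's a toy sketch of the usual recipe: normalize a few raster layers, take a goal-specific weighted sum, and pick the best cell inside a radius. The grids, weights, and the scoring itself are all made-up placeholders; the point of the comment above is that choosing them *is* the hard, intent-specific part.

```python
import numpy as np

# Toy 2D grids standing in for real data layers (e.g. SSURGO-derived
# soil quality, a vegetation index, water availability), scaled 0-1.
rng = np.random.default_rng(42)
soil = rng.random((50, 50))
ndvi = rng.random((50, 50))
water = rng.random((50, 50))

# The weights encode the user's goal -- these numbers are arbitrary.
weights = {"soil": 0.5, "ndvi": 0.3, "water": 0.2}
score = weights["soil"] * soil + weights["ndvi"] * ndvi + weights["water"] * water

# Restrict the search to a radius (in grid cells) around a point of interest.
center, radius = (25, 25), 10
yy, xx = np.mgrid[0:50, 0:50]
mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

# Mask out everything outside the radius, then take the argmax.
masked = np.where(mask, score, -np.inf)
best = np.unravel_index(np.argmax(masked), masked.shape)
print("best cell:", best, "score:", round(float(masked[best]), 3))
```

Swap the random arrays for real rasters (resampled to a common grid) and this is roughly what a GIS analyst does by hand today.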
It was just an example, but you are correct. A more "imagery required" example would be "find all the houses with roofs that have been damaged in the last 6 months", or something like that, which salespeople or insurers could use.
That's a good example, yes. I think this one could actually be handled by multiple AI agents searching over existing algorithms, or even training a model and then running it. How amazing would it be if all of that could actually happen from a few prompts :)
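The core step any such agent would have to automate for the roof query is change detection between image dates. A deliberately naive sketch (real pipelines need co-registration, radiometric correction, and usually a trained classifier; the tiles, threshold, and function names here are all invented for illustration):

```python
import numpy as np

def changed_fraction(before: np.ndarray, after: np.ndarray) -> float:
    """Fraction of pixels whose absolute change exceeds 0.2 (on a 0-1 scale)."""
    return float(np.mean(np.abs(after - before) > 0.2))

def flag_damaged(tiles_before, tiles_after, min_changed=0.1):
    """Return indices of tiles that look materially changed between dates."""
    return [i for i, (b, a) in enumerate(zip(tiles_before, tiles_after))
            if changed_fraction(b, a) >= min_changed]

# Three co-registered roof tiles; tile 1 gets synthetic "damage":
# a bright 4x4 patch appears (16 of 64 pixels change -> 25% changed).
before = [np.zeros((8, 8)) for _ in range(3)]
after = [t.copy() for t in before]
after[1][2:6, 2:6] = 1.0

print(flag_damaged(before, after))  # -> [1]
```

An agent-driven version would mostly be deciding which real detector to run and over which tiles, which is plausibly promptable.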