This is probably gonna make me sound like a Zillow shill, but here goes… Do people realize that algorithms like the Zestimate operate in aggregate over a large swath of data (properties)? Their goal is to get as close as possible, on average. There will be outliers in any such algorithm (in fact, outliers are statistically expected), and individual data points being off don't necessarily invalidate it. The real performance metric is how well it predicts sale prices overall, across a large number of properties in a large number of metro areas. Take issue with it on that basis, if you like (I personally don't know of any data like this, but perhaps others do).
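To make the aggregate framing concrete, here's a minimal Python sketch with made-up numbers. It assumes the metric of interest is a median absolute percent error (roughly the kind of "median error rate" figure Zillow publishes for the Zestimate), and shows why a single wildly off estimate barely moves the aggregate number:

```python
import statistics

def median_abs_pct_error(estimates, sale_prices):
    """Median absolute percent error across a batch of sales.

    Using the median (rather than the mean) means a handful of
    extreme outliers has little effect on the reported accuracy.
    """
    errors = [abs(est - price) / price
              for est, price in zip(estimates, sale_prices)]
    return statistics.median(errors)

# Hypothetical sales: four estimates land close, one is 50% off.
sale_prices = [300_000, 450_000, 275_000, 600_000, 320_000]
estimates   = [310_000, 440_000, 280_000, 900_000, 315_000]

print(f"median abs % error: {median_abs_pct_error(estimates, sale_prices):.1%}")
# → median abs % error: 2.2%
```

Despite one estimate being off by 50%, the median error stays near 2%, which is the sense in which individual outliers don't invalidate the algorithm's overall performance.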