The researcher says

> this strange strategy will maximize your profit. “To me, it was a complete surprise”

It doesn't seem like such a surprise that algorithms that use information about rivals to optimise profit tend to price high.

Consider a small town with two gas stations; you own one. You set your price (high or low) in the morning and can't change it until the next day. Your goal is to optimise profit over the next 1000 days. On day one you price high (hoping your rival will too). But your rival prices low and wins lots of business. On day two, you price high again (hoping your rival has seen your prices and will cooperate). If your rival prices high, you both stay high for most of the remaining 998 days (there's some incentive to 'cheat' and price low, but that is easily countered by the rival pricing low). If your rival priced low on day two, you have to start pricing low too. But occasionally you'll price high to try to 'nudge' your rival into pricing high and escape low-low. If they eventually understand, you can both price high for the rest of the 1000 days. Critically, even if you're stuck at the low-low equilibrium, you'll keep trying to 'nudge' high periodically. How often you try will depend on the ratio of profit at high-high versus low-low: if you both make extreme profits pricing high-high, you have more incentive to 'nudge'; if the difference isn't great, you won't nudge as often.

It seems obvious that pricing high will be attempted roughly in proportion to its reward relative to pricing low.
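To make that concrete, here's a minimal sketch (Python, with made-up payoff numbers and a toy rival that 'gets the message' after seeing three high prices; none of this is from the article) showing that nudging out of low-low pays off far more when the high-high profit dwarfs the low-low profit:

```python
import random

def simulate(days=1000, nudge_prob=0.1, hi_hi=100, lo_lo=40, hi_lo=10,
             highs_needed=3, seed=None):
    """Toy model of the 'nudging' story. I mostly stick with the low-low status
    quo, but occasionally price high hoping the rival catches on. The rival
    prices low until it has seen me price high on `highs_needed` days, then
    cooperates (prices high) for the rest of the horizon. Payoffs are invented."""
    rng = random.Random(seed)
    my_profit, highs_seen, rival_cooperating = 0, 0, False
    for _ in range(days):
        rival = "high" if rival_cooperating else "low"
        if rival == "high":
            me = "high"                                            # keep cooperating
        else:
            me = "high" if rng.random() < nudge_prob else "low"    # the 'nudge'
        if me == "high" and rival == "low":
            my_profit += hi_lo                 # a costly, unanswered nudge
            highs_seen += 1
            if highs_seen >= highs_needed:
                rival_cooperating = True       # the rival got the message
        elif me == "high" and rival == "high":
            my_profit += hi_hi                 # tacit cooperation
        else:
            my_profit += lo_lo                 # price war
    return my_profit

if __name__ == "__main__":
    for hi_hi in (45, 300):                    # modest vs extreme high-high reward
        for p in (0.0, 0.02, 0.1):
            avg = sum(simulate(nudge_prob=p, hi_hi=hi_hi, seed=s)
                      for s in range(50)) / 50
            print(f"high-high={hi_hi:3d}  nudge_prob={p:.2f}  avg profit={avg:10,.0f}")
```

The gap between the zero-nudge row and the nudging rows is small when high-high barely beats low-low and huge when high-high is extreme, which is the 'incentive to nudge' argument in one table.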

The researchers' conclusion seems reasonable:

> it’s very hard for a regulator to come in and say, ‘These prices feel wrong’

and

> what can regulators do? Roth admits he doesn’t have an answer.

(i.e. in practical terms, there's no way regulators can police what algorithms sellers use - I can't think of exceptions to this, but perhaps there are some special cases)

Regulators could ensure that companies' detailed financial data is public. If everybody can see how much profit and opportunity there is in a given business, that will encourage other people to enter and do the same thing.

I always think that in this day and age financial secrecy mostly benefits the richest people and adds to the informational imbalance (which doesn't help even the idealised model of free markets).

I agree, but it's much more complex than just forcing companies' books to be open to the public. There are all kinds of accounting tricks you can pull with complex constellations of "entities" (the jargon used by tax-dodge experts for the fake companies they set up). IMO we have to retreat from a world where anyone with a couple hundred dollars can create a corporation by filing a form. Corporate personhood should be a privilege granted specifically by a democratically elected government for well-delineated purposes, and subject to revocation if the public trust is betrayed. This in turn obviously means that we need to establish (or re-establish, in places) democratic control of the government and, unless we do this all at once everywhere (very tricky), also massively reduce cross-border capital flows to the point where they can be reasonably understood and regulated by these domestic democratic governments.

All of this is a tall order, but there's no shortcut to establishing, re-establishing, or maintaining a democracy.

I think the idea of financial transparency should be more discussed at any level. Yes, there will be loopholes, but now the default is "money is secret, how dare you!".

I would claim that democracy has been an ideal at every point in time. Most people have (or had) insufficient education to understand all the topics. Even in more advanced countries (with better education on average) the discourse gets focused on petty issues. The societies that manage to focus on the longer term will be the winners of the coming centuries!

To be clear: I agree. There is no reason that citizens shouldn't be able to inspect every document produced by their government, listen to any conversation about any topic that involves any officials, and no reason not to extend this regime into the so-called "private sector", which is -- legally and historically -- a creation of the state, not the other way around.

If you don't like that intrusion into your finances, you are still free to do business using your own personhood, but the public won't provide you with a spare disposable one.

> (i.e. in practical terms, there's no way regulators can police what algorithms sellers use - I can't think of exceptions to this, but perhaps there are some special cases)

Regulators can already police the data used as inputs in decision-making in industries like insurance, so policing the algorithms that operate on that data doesn't seem like too much of a reach.

> Regulators can already police the data used as inputs in decision-making in industries like insurance

How enforceable is policing which data can be used as inputs though?

It's common for insurance companies to price based on age and sex (e.g. teenage boys will typically pay higher car insurance premiums than similarly aged girls). Presumably insurers are not allowed to price on a factor such as race. Unlike collusion, overt use of a variable like 'race' in a pricing model could be detected and enforced via a company whistleblower.

But how would a regulator find/prove algorithmic collusion?

In an extreme case, regulators could ban all use of competitors' data in sellers' pricing models. But that seems extreme and unproductive, since it could stop price wars (downward pressure on prices) as well as mute the good effects of the 'invisible hand' (higher prices attracting more market entrants and greater investment).
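For contrast, here's a toy illustration (the factor names and allow-list are invented) of why policing inputs is tractable in a way that policing collusion isn't: overt use of a disallowed variable is essentially a set-membership check against the filed rating plan.

```python
# Hypothetical allow-list a regulator might approve in a rate filing.
ALLOWED_FACTORS = {"age", "sex", "vehicle_class", "annual_mileage", "claims_history"}

def audit_rating_factors(model_inputs: set) -> set:
    """Return every input the pricing model uses that isn't on the approved list."""
    return model_inputs - ALLOWED_FACTORS

# Invented example of a filed model's inputs.
filed_inputs = {"age", "sex", "annual_mileage", "race", "competitor_price"}
print("disallowed inputs:", sorted(audit_rating_factors(filed_inputs)))
# -> disallowed inputs: ['competitor_price', 'race']

# This only catches *overt* use of a variable. It says nothing about proxies
# (e.g. postcode correlating with race) and nothing about tacit collusion,
# which is exactly the gap being pointed out above.
```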

I work in insurance, but not specifically in-depth on regulated insurance rates like personal auto.

That being said, I can add some insight. Most state insurance regulators require a company to justify the rate it's charging based on actual claims data (i.e. you wouldn't be allowed to use a competitor's pricing as a justification). Insurance companies would basically never share their claims data with their competitors, so there's functionally a ban on using competitors' data.

Any rate changes have to be justified (based on claims frequency and experience) to the state regulator. I don't think it's a perfect system by any means; insurance commissions aren't completely unbiased, and there's some flexibility in what data the insurer uses. But in my experience it's pretty effective at regulating the data you can and can't use.

The ultimate outcome is that most insurers in these markets run combined loss ratios of greater than 90% (so on an underwriting basis, more than 90% of the premium they earn goes to paying claims and overhead associated with managing those claims).
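As a rough illustration of what that 90%+ figure means (all numbers below are invented, not taken from any filing):

```python
# Toy underwriting-year figures, invented for illustration.
earned_premium = 100_000_000        # premium earned over the period
incurred_losses = 68_000_000        # claims paid and reserved
claim_handling_and_overhead = 25_000_000

loss_ratio = incurred_losses / earned_premium                 # 0.68
expense_ratio = claim_handling_and_overhead / earned_premium  # 0.25
combined_ratio = loss_ratio + expense_ratio                   # 0.93

print(f"combined ratio: {combined_ratio:.0%}")            # 93% of premium goes back out
print(f"underwriting margin: {1 - combined_ratio:.0%}")   # ~7%, before investment income
```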

I think the model of "here's a regulatory body, justify what you're charging based on this set of allowed data" is a decent framework, even if it doesn't work in every market.

If you're curious, the SERFF website [0] has rate filings for a lot of states. So you can see when a rating factor changes and what it changed to. Most of the detailed claims data isn't available for data privacy reasons, but depending on the state you choose, there will be summary figures available.

[0]: https://www.serff.com/serff_filing_access.htm

> But how would a regulator find/prove algorithmic collusion?

They don't need to. At least in the US, courts look at the outcome, and if the outcome is discriminatory, that's the important part. This falls under the idea of disparate impact. Beyond that, the RealPage cases offer an example of modern-day prosecution of algorithmic collusion.
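To illustrate what an outcome-based test can look like, here's a sketch of the 'four-fifths rule' heuristic from US employment-discrimination guidance (the groups, rates, and its applicability to pricing are illustrative assumptions, not a statement of what courts require in collusion cases):

```python
# Favourable-outcome rates by group; numbers are invented for illustration.
outcome_rates = {"group_a": 0.60, "group_b": 0.42}

def impact_ratio(rates: dict) -> float:
    """Lowest group's favourable-outcome rate divided by the highest group's."""
    return min(rates.values()) / max(rates.values())

ratio = impact_ratio(outcome_rates)
print(f"impact ratio: {ratio:.2f}")        # 0.70
print("flag for review" if ratio < 0.8 else "within the four-fifths threshold")
```

The point is that the test looks only at outcomes; nothing in it depends on how the underlying algorithm produced them.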

Seems like an optimistic read on things. This is the kind of common-sense approach you would expect in a world without lawyers: just observing that collusion is bad because the effects are bad, while digging into the details of the causes is completely irrelevant for the public/plaintiff, because it's really just on the company to fix the undesirable result.

IANAL, but if the RealPage outcomes were definitive or reasonably generalized results dealing with the core issue, then similar arguments against e.g. Amazon would be a slam dunk. AFAIK, the actual case outcome just hinges on details about "nonpublic data" and similar, not remotely on bad effects for consumers or anything like that. Since printing RealPage's database in the newspaper would not actually help apartment-hunters, doesn't this just tell landlords and third-party markets how to do price-fixing legally next time? Most likely algorithmic pricing, surveillance pricing, etc. are still coming to your grocery store after the issue is "settled" for property rental, or at least settled for RealPage, in certain jurisdictions, for now.

> AFAIK, actual case outcome just hinges on details about "nonpublic data" and similar.

That sounds like insider trading. Price fixing need not involve nonpublic information (beyond the actual conspiracy to fix the prices, since it normally helps to keep that part secret).

> “Settling Defendants have agreed not to provide nonpublic data to RealPage for use in competitor pricing recommendations and to refrain from using RealPage’s RMS that relies on non-public competitor data to make pricing recommendations,” attorneys wrote in the settlement filing.

https://www.multifamilydive.com/news/realpage-class-action-l...

I agree that "nonpublic" is barely related to the problem, so how it's related to a solution is unclear. But it seems to be the only general aspect of the outcome. Otherwise the outcome is just to stop doing this specific bad thing this specific time, plus fines that are less than the profit made from the bad behaviour.

This example works in a vacuum, but not much beyond it. You would see people filling up outside the small town, or a third station opening up to undercut the other two. Or there is so much overhead that the high-high price is actually the fair price, like grocery store monopolies that are bilking people so they can reap... 1-2% profit margins.

> (i.e. in practical terms, there's no way regulators can police what algorithms sellers use - I can't think of exceptions to this, but perhaps there are some special cases)

One obvious answer would be to introduce a publicly-owned, zero-margin competitor not constrained by this algorithm, thus reintroducing an incentive to drive down costs or drive up quality.

What's interesting is that the surprise seems less about the behavior itself and more about how minimal the assumptions were.

That doesn't even consider buying the competitor across the street and paying lobbyists to have Congress ignore you 'for the benefit of the consumer', because with the combined stores you can gain market efficiencies. Of course, ignore the price gouging that actually happens.

> what can regulators do?

Regulators could say "you're not allowed to make more than X profit". They already do that with utilities, so it's not a matter of practical impossibility.

I continue to believe that in the case of oversized margins, the government should just enter the market themselves. Buy the smallest competitor and operate it at a reasonable margin, growing it at every opportunity. If the rest of the market lowers their margins to beat it, spin the thing off.

Basically don't bother to dictate margins, just declare that market a failure.

The problem with this is that the cap ends up being a signal in and of itself: when you say the cap is X, everyone immediately sets their profits to X and never budges from there.