Confused. I just tried it in the Shopify Assistant and got:

There is no built-in Liquid property to directly detect Shopify Collective fulfillment in email notifications.

You can use the Admin GraphQL API to programmatically detect fulfillment source.

In Liquid, you must rely on tags, metafields, or custom properties that you set up yourself to mark Collective items.

If you want to automate this, consider tagging products or orders associated with Shopify Collective, or using an app to set a metafield, and then check for that in your Liquid templates.

What you can do in Liquid (email notifications):

If Shopify exposes a tag, property, or metafield on the order or line item that marks it as a Shopify Collective item, you could check for that in Liquid. For example, if you tag orders or products with "Collective", you could use:

  {% if order.tags contains "Collective" %}
    <!-- Show Collective-specific content -->
  {% endif %}
or for line items:

  {% for line_item in line_items %}
    {% if line_item.product.tags contains "Collective" %}
      <!-- Show something for Collective items -->
    {% endif %}
  {% endfor %}
In the author's 'wrong' vs 'seems to work' answer, the only difference is checking the tag on the line items vs. on the order. The flow (a template? he refers to it as 'some other cryptic Shopify process') he uses in his tests does seem to add the 'Shopify Collective' tag to the line items, and potentially also to the order if the whole order is Shopify Collective fulfilled, but without further info we can only guess at his setup.
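For illustration, a minimal sketch of checking both levels in a notification template, assuming (and this is exactly the part we can't verify from the post) that his flow writes a literal 'Shopify Collective' tag to the products and/or the order before the notification renders:

  {% comment %} Order-level check: fires only if the order itself got the tag {% endcomment %}
  {% if order.tags contains "Shopify Collective" %}
    <!-- Content for fully Collective-fulfilled orders -->
  {% endif %}

  {% comment %} Line-item-level check: fires if the product carries the tag {% endcomment %}
  {% for line_item in line_items %}
    {% if line_item.product.tags contains "Shopify Collective" %}
      <!-- Content for individual Collective items -->
    {% endif %}
  {% endfor %}

Which branch fires depends entirely on where and when his flow adds the tag, which is precisely the open question.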

While using AI can always lead to imperfect results, I feel the evidence presented here does not support the conclusion.

P.S. Given the reference to 'cryptic Shopify processes', I wonder how far the author would get with 'just the docs'.

So because you got a good response, the conclusion is invalid? How does the user know whether they got a good response or a bad one? Due to the parameters passed, most LLMs are functionally non-deterministic, rarely giving the same answer twice even with the same question.

I just asked ChatGPT "whats the best database structure for a users table where you have users and admins?" in two different browser sessions. One gave me SQL with varchars and a role column using:

    role VARCHAR(20) NOT NULL CHECK (role IN ('user', 'admin')),
the other session used text columns and defined an enum type to use first:

    CREATE TYPE user_role AS ENUM ('user', 'admin', 'superadmin');
    -- other sql snipped
    role user_role NOT NULL DEFAULT 'user',
An AI Assistant should be better tuned, but often isn't. That variance makes it feel wildly unhelpful for 'documentation' to me, as two people end up with quite different solutions.

So by extrapolation, all of the IT books of the past were "wildly unhelpful", as no two of them presented the exact same solution to a problem, even all those pretending to be 'best practice'?

Your question is vague (a technical observation, not meant derogatorily). In which DBMS? By what metric of 'best'? For which size of database? Does it need to support internationalization? Will the roles be updated or extended in the future? Etc.

You could argue an AI Assistant should ask for this clarification when the question is vague rather than make a guess. But taken to the extreme, this is not workable in practice. If every minute factor had to be answered by the user before getting a result, only the real experts would ever reach the stage of getting an answer, if at all.

This is not just an AI problem, but a problem (human) business and technical analysts face every day in their work. When do you switch to proposing a solution rather than asking for further details? It is, BTW, also why all those BPM or RPA platforms that promise to eliminate 'programming' and let the business analyst 'draw' a solution often fail miserably. They either have too-narrow defaults or keep needing to be fed detail long past the BA's comfort zone.

I think you're making the author's point, though. If two users ask the bot the same question and get different answers, is the bot valuable? A dice roll that might be (or is even _probably_) correct is not what I want when going directly to the official docs.

Not sure the author is giving the full account though, as his answer snippet was probably just a part of the same answer I got, framed and interpreted differently (the AIs are never so terse as to just whip out a few lines of code).

Besides, it is not even incorrect in the way he states it is. It is fully dependent on how he added the tags in his flow, as the complete answer correctly stated. He speculates about some timing issue in some 'cryptic Shopify process' adding the tag at a later stage, but this is clearly wrong, as his "working answer" (which is also in the Assistant reply) does rely on the tag having been added at the same point in the process.

My purely speculative and deliberately exaggerated take: he just blindly copied some flow template, then copy/pasted the first Liquid code box from the (same as I got?) Assistant's answer, tested it on one order, and found it not doing what he wanted, which suited his confirmation bias regarding AI. Later he tried pasting the second Liquid code box (or the same answer you will get from Gemini through Google Search), found 'it worked' on his one test order, and still blamed the Assistant for being 'wrong'.

It's non-deterministic. It gives different answers each time you ask, potentially, and small differences in your prompt yield different completions. It doesn't actually understand your prompt, you know.