dtagames 6 hours ago

Telling an LLM that something is a fact or the thesis doesn't make it one. We can't get around the predictive nature of how models and transformers operate by using different tokens. It's still just tokens, all the way down.

In fact, your complicated prompt will probably lead to summaries that have incorrect "facts" in them and arguments that don't fit your "thesis." That's because that text exists in the training data and you can't hand-wave it away with prompting.

  • nsavage 5 hours ago

    I see what you're saying, but this works a little differently: it asks the LLM what it thinks the writing is trying to say and what the writing uses to support it. Agreed that hallucinations are an issue, though.

    • dtagames an hour ago

      "What it thinks" is where you've gone wrong. There's no thinking happening and the answer to that is just another prediction.