It’s the human review layer that works alongside Xeet’s scoring to handle what models can’t reliably read yet. The automated component, trained on extensive human-labelled examples, handles the bulk of cases, while humans cover the flagged or outlier cases where nuance matters. This mix keeps scoring both efficient and context-aware.
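The split described above can be sketched as a simple routing rule: items the model scores with high confidence stay automated, while flagged or low-confidence items go to a human. This is a minimal illustration only; the `flagged` field, the confidence threshold, and the function name are assumptions, not Xeet's actual API.

```python
def route(item: dict, confidence: float, threshold: float = 0.8) -> str:
    """Route an item to human review or automated scoring.

    Hypothetical sketch: `flagged` and `threshold` are illustrative
    assumptions, not part of Xeet's real schema.
    """
    # Humans take anything explicitly flagged or scored with low confidence.
    if item.get("flagged") or confidence < threshold:
        return "human_review"
    # Everything else stays with the automated scorer.
    return "auto_scored"

# A confident, unflagged item stays automated; a flagged one does not.
print(route({"flagged": False}, confidence=0.95))  # auto_scored
print(route({"flagged": True}, confidence=0.95))   # human_review
```

The point of the sketch is the routing boundary: the model does the volume, and the threshold decides how much nuance gets escalated to people.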
