- Zhang et al. (2025) — An Empirical Study on Prompt Compression for Large Language Models. arXiv. arxiv.org/abs/2505.00019
- Qibang Liu, Wenzhe Wang, Jeffrey Willard (2025) — Effects of Prompt Length on Domain-specific Tasks for Large Language Models. arXiv. arxiv.org/pdf/2502.14255
- TryChroma Research (2025) — Context Rot: How Increasing Input Tokens Impacts LLM Performance. research.trychroma.com/context-rot
- Mosh Levy, Alon Jacoby, Yoav Goldberg (2024) — Same Task, More Tokens: the Impact of Input Length on LLMs. arXiv. arxiv.org/html/2402.14848v1
- Databricks Engineering (2025) — Long Context RAG Performance of LLMs. databricks.com/blog/long-context-rag-performance-llms
- Balarabe, T. (2024) — Understanding LLM Context Windows: Tokens, Attention, and Challenges. medium.com/@tahirbalarabe2
The optimal prompt length can be grouped into categories:
✦ Simple tasks
→ 50–100 words (summaries, short explanations, standard questions).
✦ Moderate complexity
→ 150–300 words (analyses, drafts, creative summaries).
✦ Demanding multi-part tasks
→ 300–500 words (complex specifications, technical documentation, comprehensive reports).
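The bands above can be turned into a quick self-check before sending a prompt. The sketch below is a minimal illustration: the category names and word bands mirror the list above, while the function name and messages are hypothetical, not from any library.

```python
# Rough word-count bands per task category, taken from the guideline list above.
RECOMMENDED_WORDS = {
    "simple": (50, 100),     # summaries, short explanations, standard questions
    "moderate": (150, 300),  # analyses, drafts, creative summaries
    "complex": (300, 500),   # multi-part specs, technical documentation, reports
}

def check_prompt_length(prompt: str, task: str) -> str:
    """Compare a prompt's word count to the recommended band for its task type."""
    lo, hi = RECOMMENDED_WORDS[task]
    n = len(prompt.split())  # crude word count; token counts would be more precise
    if n < lo:
        return f"{n} words: below the ~{lo}-{hi} band, consider adding detail"
    if n > hi:
        return f"{n} words: above the ~{lo}-{hi} band, consider trimming"
    return f"{n} words: within the ~{lo}-{hi} band"
```

Word counts are only a proxy; models actually consume tokens, so a tokenizer-based count would track the research cited above more closely.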