Auto-Differentiating Any LLM Workflow: A Farewell to Manual Prompting
(arxiv.org) | 13 points by meame2010 2 days ago | 4 comments
hnuser123456 2 days ago |
Congrats on the paper! I read through some of the GitHub docs and the paper, and this sounds very impressive, but I'm trying to think of how to best use it in practice. Is the idea that I could give some kind of high-level task/project description (like a Python project), and this framework would intelligently update its own prompting to avoid getting stuck and to continue "gaining skill" throughout the process of working on the task? Could this be used to build such a system? Very curious to learn more.
meame2010 2 days ago |
You need a training dataset and a task pipeline that works. You can refer to this doc: https://adalflow.sylph.ai/use_cases/question_answering.html
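Roughly, the two pieces look like this. This is an illustrative, framework-agnostic sketch only; names like QAExample, call_llm, qa_pipeline, and evaluate are placeholders I made up, not AdalFlow's API. See the doc above for the real Generator/Parameter/Trainer setup.

```python
# Illustrative sketch: a tiny training set plus a stubbed task pipeline,
# the two ingredients referred to above. Not AdalFlow code.
from dataclasses import dataclass

@dataclass
class QAExample:
    question: str
    answer: str

# 1) A small training dataset the optimizer can score the pipeline against.
train_set = [
    QAExample("What is 2 + 2?", "4"),
    QAExample("What color is the sky on a clear day?", "blue"),
]

# 2) A task pipeline: a system prompt (the thing the framework would "train")
#    plus a model call. The model call is a stub here; swap in a real client.
def call_llm(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "blue"  # placeholder response

def qa_pipeline(system_prompt: str, question: str) -> str:
    prompt = f"{system_prompt}\nQuestion: {question}\nAnswer:"
    return call_llm(prompt)

# 3) An evaluation loop: this score is the feedback signal a prompt
#    optimizer (e.g. AdalFlow's trainer) would use to propose a better
#    system prompt, instead of you editing it by hand.
def evaluate(system_prompt: str) -> float:
    hits = sum(
        qa_pipeline(system_prompt, ex.question).strip().lower() == ex.answer
        for ex in train_set
    )
    return hits / len(train_set)

print(evaluate("Answer the question with a single word or number."))
```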
hnuser123456 2 days ago |
Thank you, I missed the use cases section, that explains a lot. Nice documentation. Might play with this when I get home.
meame2010 2 days ago |
Implemented in AdalFlow: https://github.com/SylphAI-Inc/AdalFlow