Replies: 10 comments 13 replies
-
I tried chat-search-assistant today and it worked better at communicating to the user than all the other ones I tried. So I can study that one and build from there. But I'm still curious: Are many of the basic demos intended to demo what's happening internally with little or no communication to the user? Or is user communication intended, but not working due to the prompts being optimized for ChatGPT and working much less well with non-ChatGPT models?
-
This example is actually working as intended. The alternative way to design the task would be to set it up with
-
Just tested the new variant, 1-agent-3-tools-address-user, with Groq llama-3.1-70b, but it didn't allow any input at all. I just got: WARNING - Task NumberAgent stuck for 5 steps; exiting. When I set interactive back to True it then worked, and it did communicate with me (the human user) after I hit the enter key. So the following did help make it more communicative: addressing_prefix=AT. I also modified the system prompt slightly so that it wouldn't call me "@User" but would instead call me by my real name. Worked on the first try.
-
@FilterJoe another way to have the system wait for the user is to use the new
-
I must have changed some setting somewhere that's causing this and forgot what my change was. I'll stop bothering you with questions for a few days until I get a better handle on things, as I continue to read and understand the many different parameters and how the code is structured. The code quality is really great. Part of why I picked langroid as my first agent framework is to study high-quality code. I hope to adopt (what looks to me like) high-quality coding style and design patterns in my own coding projects. Maybe I can come up with a good example or two to add to the project. Thank you for all your help!
-
(langroid-py3.12) jg@d1:~/PycharmProjects/langroid$ PYTHONPATH=$(pwd) python3 examples/basic/1-agent-3-tools.py
-
In that last run you can see that sometimes the input after the : prompt is empty when I just hit enter, and again it just showed the JSON of the tool request.
-
I think langroid 0.14 is coming from conda, which is installed on that VM. Not sure if that is relevant, given that the Python source code is present. The first VM I used to test langroid did not have conda, and therefore did not have langroid at all when I did pip show langroid. It was just running off the Python code I pulled from git.
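A quick way to see which copy of a package Python actually imports (conda's site-packages vs. the git checkout on PYTHONPATH) is to ask importlib for the module's origin. A sketch using a stdlib module for illustration; in practice you'd substitute "langroid":

```python
from importlib.util import find_spec

# find_spec reports the file the import system would load for this name,
# without importing it. Replace "json" with "langroid" to check whether
# conda's copy or the git checkout on PYTHONPATH wins.
spec = find_spec("json")
print(spec.origin)
```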
-
@FilterJoe I updated the examples so you can pass the model as a cmd-line arg using -m. In my previous examples I showed you, I was passing these args, but they were ignored since the code wasn't set up to use them. As for langroid, you should be able to update to the latest (I never use conda, so can't advise there).
python3 examples/basic/1-agent-3-tools.py -m groq/llama-3.1-70b-versatile gives this --
and
python3 examples/basic/1-agent-3-tools-address-user.py -m groq/llama-3.1-70b-versatile gives this --
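The -m flag pattern in those commands can be sketched with argparse (the actual example scripts may parse args differently; the empty default here is just a placeholder meaning "use the script's default model"):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "-m", "--model",
    default="",  # empty -> fall back to the script's default model
    help="chat model, e.g. groq/llama-3.1-70b-versatile",
)
# Parsing the args from the command above:
args = parser.parse_args(["-m", "groq/llama-3.1-70b-versatile"])
print(args.model)  # -> groq/llama-3.1-70b-versatile
```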
-
It's fixed after git pull and poetry install. My system is now running just like yours. I'll let you know if I run into any other unexpected behaviors.
-
I've been testing (git clone of) Langroid for the first time in the past couple days, using a Debian 13 VM on a Mac. Poetry worked poorly so I had to install quite a few of the dependencies with pip.
I have it working. The super simple chat demo works fine. Beyond that I've had a few issues, but perhaps it is because I've never bothered to set up a ChatGPT account. I've been using Groq llama-3.1-70b-versatile mostly, but also attempted to use Gemini Pro (which failed miserably, likely due to some kind of end token issue leading to infinite loop) and Cohere Command R+ (did slightly worse than Groq).
The main issue is that when I try to run some of the demos/examples, it often does the actual work (which I can see in the log file) but doesn't actually communicate the result. Is this the intent behind these demos or does this indicate that the LLMs I've chosen aren't working as well as ChatGPT would have? Example:
1-agent-3-tools:
Sometimes if I try hard enough to convince the LLM to talk to me, it will. But I have noticed that sometimes it does that without even using the tool.
I've had other issues as well but this one is by far the most prevalent. Is there some setting I can play with to steer the agent to be more communicative after getting tool result?
Same thing happened with chat-search which was even more problematic because I would get the JSON showing that it invoked the tool, but no results. And then I would ask it to tell me the results and it would hallucinate a response with made-up links that didn't work (no access to earlier tool result?).
I built a tool of my own for doing one-shot chat-search with llama 3.1 that worked quite well but it took me many days to get it working the way I wanted. It would be great if I could figure out how to use langroid correctly to do the same thing with far fewer lines of code, and with more flexibility than my own little project.
Am I doing something wrong? Are there certain settings I need to play with?
Or does it only really work well with ChatGPT 4 (which I haven't tested yet)?
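For context on what I was running: the minimal shape of a langroid setup with a non-OpenAI model looks roughly like this (a sketch based on langroid's config classes; the system message is illustrative, and actually running it needs a GROQ_API_KEY):

```python
import langroid as lr

# Non-OpenAI models are routed via a provider prefix on chat_model
llm_config = lr.language_models.OpenAIGPTConfig(
    chat_model="groq/llama-3.1-70b-versatile",
)
agent = lr.ChatAgent(
    lr.ChatAgentConfig(
        llm=llm_config,
        system_message="After using a tool, report the result to the user.",
    )
)
# interactive=True makes the task wait for human input each turn
task = lr.Task(agent, interactive=True)
# task.run()  # uncomment to start the chat loop
```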