How to use LLaMAFactory's WebUI for Chat inference
by SuponjiAyume - opened
I downloaded your model locally, as well as Qwen2.5-32B, set everything up as shown in the screenshot, and selected the qwen template. However, the answers keep looping, and the output includes content I never mentioned. What is wrong with my settings? Thank you for your patience and guidance!
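For reference, this is roughly my setup expressed as a YAML config in the style of LLaMA-Factory's example inference configs (the model path below is a placeholder for my local download; other keys are left at their WebUI defaults):

```yaml
# Sketch of the equivalent inference config (path is a placeholder).
model_name_or_path: /path/to/Qwen2.5-32B  # locally downloaded model
template: qwen                            # template selected in the WebUI
```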