You are supposed to manually set scale to 1.0 and base to 10000 when using llama 2 with 4096 context. The automatic scaling assumes the model was trained for 2048. That said, I'm the OP, and it still doesn't work, at least with this particular fine tune.
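To illustrate why those defaults matter, here's a rough sketch of what linear RoPE scaling does to position angles (the dimension size and positions are illustrative, and this is not kobold.cpp's actual implementation):

```python
def rope_angles(pos, dim=128, base=10000.0, scale=1.0):
    # Rotation angle RoPE applies to each frequency pair at a given
    # token position. "Linear" context scaling multiplies the position
    # by scale < 1, squeezing a long context into the trained range.
    return [(pos * scale) * base ** (-2 * i / dim) for i in range(dim // 2)]

# llama2 was trained natively at 4096, so a position like 4000 with
# scale=1.0, base=10000 is exactly what the model saw in training.
native = rope_angles(4000)

# An auto-scaler that assumes a 2048 training length would instead pick
# scale = 2048/4096 = 0.5, making position 4000 look like position 2000:
assert rope_angles(4000, scale=0.5) == rope_angles(2000)
```

The point is that with a natively-4k model, any scale other than 1.0 feeds the model position encodings it never saw during training, which is one plausible source of the degradation described below.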
is the 4k context length of llama2 for real?
https://sh.itjust.works/post/2107112
I’ve been using airoboros-l2-70b for writing fiction, and while overall I’d
describe the results as excellent and better than any llama1 model I’ve used, it
doesn’t seem to be living up to the promise of 4k token sequence length. Around
2500 tokens, output quality degrades rapidly: it either starts repeating
previous text verbatim or becomes incoherent (grammar, punctuation, and
capitalization disappear; it becomes a salad of vaguely related words). Any other
experiences with llama2 and long context? Does the base model work better? Are
other fine tunes behaving similarly? I’ll try myself eventually, but the 70b
models are chunky downloads, and experimentation takes a while at 1 t/s. (I’m
using GGML Q4_K_M on kobold.cpp, with rope scaling off like you’re supposed to
do with llama2)
Those are OpenCL platform and device identifiers; you can use clinfo to find out which numbers are what on your system.
Also note that if you’re building kobold.cpp yourself, you need to build with LLAMA_CLBLAST=1 for OpenCL support to exist in the first place. Or LLAMA_CUBLAS for CUDA.
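As a rough sketch, assuming a Linux build from source (flag names as in the kobold.cpp makefile at the time):

```shell
# List OpenCL platforms and devices with their index numbers --
# these are the identifiers kobold.cpp asks for.
clinfo -l

# Build kobold.cpp with CLBlast (OpenCL) support:
make LLAMA_CLBLAST=1

# ...or with cuBLAS (CUDA) support instead:
make LLAMA_CUBLAS=1
```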
What’s the problem you’re having with kobold? It doesn’t really require any setup. Download the exe, click on it, select model in the window, click launch. The webui should open in your default browser.
All... Subscribed and Local seem to be working correctly; some of the posts on them are new.
Top Day is a good tip. Though I do think something is broken; it seems too unlikely that this particular batch of shitposts is so uniquely hot that it stays up all day, when before the feed was moving quite fast.
Why is the front page suddenly so stale?
https://sh.itjust.works/post/77683
The All feed on both Hot and Active modes is exactly the same as it was most of
a day ago, all the same posts in the same order, except they’re all 12h+ old
now. At first I thought federation might not be working due to overload, but on
New, posts are coming in all the time, both local and remote. What’s up?
Reddit has over 2,000 employees, most of whom are doing bullshit nobody using the site actually needs or wants; it's possible to run a lot leaner than that. Like Reddit itself used to, before they started burning hundreds of millions trying to compete with every other social media site at once instead of just being Reddit.