Swift is a more convenient Rust (2023)
https://nmn.sh/blog/2023-10-02-swift-is-the-more-convenient-rust
Is there any work on reverse engineering LLMs, especially the closed-source API ones? For example, how can we learn about the data used to train Claude Sonnet 4.5?
And, trickier but just as important, is there any work on recovering properties of the pretrained model AFTER it's been RLHF'd? For example, what kinds of biases existed in gpt-4o before it was de-biased?
Do biases go away completely, or do they just get suppressed deep in the model's "mind"?
Run LLMs on Apple Neural Engine (ANE)