#RESULTS:
Excess words indicate widespread LLM usage

Excess vocabulary can also be driven by events such as the COVID pandemic. For comparison, using the set of four excess content words from 2021 (covid, pandemic, coronavirus, sars; any scientific paper on COVID-19 likely contained at least one of these four words in its abstract) yielded a frequency gap Δ = 0.069. This shows that #LLMusage in 2024 was at least two times higher than the size of the COVID-related literature at its peak.

Lower bounds differed between subcorpora. These estimates are above 30%, which is in line with recent surveys on researchers' use of LLMs for manuscript writing, and our results show how those self-reported behaviors translate into real-world #LLMusage in final publications. Given the potential explanations for this heterogeneity in the lower bound of #LLM use for #scientificediting, the true extent of use may be closer to the highest lower bounds we observed, as those may be corpora where LLM usage is the most naïve & the easiest to detect.
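As a rough illustration of the comparison above (not the paper's exact pipeline), a frequency gap for a marker-word set can be sketched as the observed share of abstracts containing at least one marker word, minus a counterfactual share extrapolated from earlier years. The function name, the toy corpus, and the counterfactual value `expected_share` are all assumptions for the sketch:

```python
def frequency_gap(abstracts, marker_words, expected_share):
    """Share of abstracts containing at least one marker word,
    minus the share extrapolated from earlier (pre-event) years."""
    markers = {w.lower() for w in marker_words}
    hits = sum(
        1 for a in abstracts
        if markers & set(a.lower().split())  # any marker word present?
    )
    observed = hits / len(abstracts)
    return observed - expected_share

# Hypothetical toy corpus: 2 of 4 abstracts mention a COVID marker word.
abstracts = [
    "A study of the covid pandemic response",
    "Protein folding with deep learning",
    "Genomic surveillance of sars variants",
    "A survey of manuscript writing practices",
]
gap = frequency_gap(abstracts, ["covid", "pandemic", "coronavirus", "sars"], 0.01)
print(round(gap, 2))  # observed 0.5 minus expected 0.01 -> 0.49
```

On real corpora the expected share would come from fitting pre-2020 (or pre-LLM) word frequencies rather than being supplied as a constant, and tokenization would need to handle punctuation and inflected forms.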