Title: P3: Prompt-engineering CoT [2024-11-20 Wed]
One application of VQ-VAE is "Recommender Systems with
Generative Retrieval" https://arxiv.org/pdf/2305.05065
which uses a Transformer model with embedding-based
retrieval for RecSys.
In contrast to continuous VAEs, VQ-VAE uses a discrete
latent representation drawn from a finite set of learned
embeddings.
VQ-VAE avoids the issue of "posterior collapse" often seen
in VAEs, where the latent variables are ignored by a
powerful decoder.
#dailyreport #promptengineering #vae #recsys #emacs
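A minimal sketch of that discrete bottleneck, assuming PyTorch; the codebook size, dimension, and commitment weight below are illustrative choices, not values from the paper:

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Map each continuous latent to its nearest entry in a finite,
    learned codebook (the discrete bottleneck of VQ-VAE)."""
    def __init__(self, num_codes=512, dim=64, beta=0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment loss weight

    def forward(self, z_e):
        # z_e: (batch, dim) continuous encoder outputs
        dists = torch.cdist(z_e, self.codebook.weight)  # (batch, num_codes)
        codes = dists.argmin(dim=1)                     # discrete latent ids
        z_q = self.codebook(codes)                      # quantized embeddings
        # codebook + commitment terms of the VQ-VAE loss
        vq_loss = ((z_q - z_e.detach()) ** 2).mean() \
                + self.beta * ((z_e - z_q.detach()) ** 2).mean()
        # straight-through estimator: gradients flow from z_q back to z_e
        z_q = z_e + (z_q - z_e).detach()
        return z_q, codes, vq_loss

vq = VectorQuantizer()
z_q, codes, loss = vq(torch.randn(8, 64))
```

Because the decoder only ever sees codebook entries, the latents cannot be "averaged away" as in the continuous-VAE collapse case.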
Title: P0: Prompt-engineering CoT [2024-11-20 Wed]
I have been reading "Chain-of-Thought Reasoning without
Prompting" https://arxiv.org/pdf/2402.10200
The technique increases reasoning ability at the cost of
extra LLM computation: it keeps track of multiple candidate
sequences at each step, then selects the top-k most
probable sequences among them. It is a beam-style approach
that replaces greedy decoding, which just picks the single
most probable token at each step.
#dailyreport #promptengineering #vae #recsys #emacs
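A toy sketch of the difference, where next_token_logprobs is a hypothetical stand-in for an LLM's next-token distribution:

```python
import math

def next_token_logprobs(seq):
    # Hypothetical stand-in for an LLM's next-token distribution,
    # conditioned (crudely) on the last token of the sequence.
    if seq and seq[-1] == "a":
        probs = {"a": 0.1, "b": 0.6, "c": 0.3}
    else:
        probs = {"a": 0.5, "b": 0.3, "c": 0.2}
    return {t: math.log(p) for t, p in probs.items()}

def greedy_decode(steps=3):
    # Greedy decoding: commit to the single most probable token each step.
    seq, score = [], 0.0
    for _ in range(steps):
        tok, lp = max(next_token_logprobs(seq).items(), key=lambda kv: kv[1])
        seq.append(tok)
        score += lp
    return seq, score

def beam_decode(k=2, steps=3):
    # Beam-style decoding: expand every kept sequence with every token,
    # then keep only the top-k most probable new sequences.
    beams = [([], 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = [
            (seq + [tok], score + lp)
            for seq, score in beams
            for tok, lp in next_token_logprobs(seq).items()
        ]
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
    return beams

print(greedy_decode())
print(beam_decode())
```

Greedy decoding is the k=1 special case; keeping k > 1 candidates is what lets lower-ranked first tokens surface better overall sequences.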
Title: P3: Negative sampling in NLP [2024-11-03 Sun]
Skip-gram training with negative sampling distinguishes
between positive (target, context) pairs and negative
samples.
Example "The dog is playing with a bone," and assume a
window size of 2 positive samples for the target word
"dog" would include:
- ("dog", "The")
- ("dog", "is")
- ("dog", "playing")
- ("dog", "with")
- ("dog", "a")
- ("dog", "bone")
Negative samples:
- ("dog", "car")
- ("dog", "apple")
- ("dog", "house")
- ("dog", "tree")
calc: log σ(v_dog ⋅ v_bone) + Σ_{w ∈ {car, apple, house, tree}} log σ(−v_dog ⋅ v_w)
#dailyreport #negativesampling #sampling #llm #recsys
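A small sketch of this calculation, with toy random vectors standing in for trained embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50
# toy embeddings (random stand-ins for trained word vectors)
vocab = ["dog", "bone", "car", "apple", "house", "tree"]
vec = {w: rng.normal(size=dim) for w in vocab}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_objective(target, positive, negatives):
    # log sigma(v_target . v_positive) + sum over negatives w of
    # log sigma(-v_target . v_w); maximized during training.
    pos = np.log(sigmoid(vec[target] @ vec[positive]))
    neg = sum(np.log(sigmoid(-(vec[target] @ vec[w]))) for w in negatives)
    return pos + neg

print(sgns_objective("dog", "bone", ["car", "apple", "house", "tree"]))
```

The negative terms push unrelated pairs apart while only the sampled words (not the full vocabulary softmax) enter the update, which is the efficiency win of negative sampling.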