A discovery / fresh articulation I had in class today: one of the things I love about #recsys is the direct connection between math and human experience. Here's a statistical property, and here's why naïvely using it would turn your recommender into a door-to-door missionary for the Good News of Bananas.

Please contribute to the 20th ACM Conference on Recommender Systems (#RecSys2026) by submitting nominations for the Women in #RecSys Journal Paper of the Year Awards.

https://recsys.acm.org/recsys26/women-in-recsys/

Title: P4: Prompt-engineering CoT [2024-11-20 Wed]
powerful decoder. The discrete nature of VQ-VAE ensures
that the latent variables are not collapsed and are
actively used in the model.
- - - - - - - - - - - - - - - - - -
I published a calendar with holidays and the biggest
conferences for Emacs on MELPA for 2024 and 2025. I am
going to maintain this calendar to promote EmacsConf,
FOSDEM, and AI conferences.
#dailyreport #promptengineering #vae #recsys #emacs

Title: P3: Prompt-engineering CoT [2024-11-20 Wed]
Its application is "Recommender Systems with
Generative Retrieval" https://arxiv.org/pdf/2305.05065,
which uses a Transformer model with embedding retrieval
for RecSys.

In contrast to continuous VAEs, VQ-VAE uses a discrete
latent representation drawn from a finite set of learned
embeddings.
VQ-VAE avoids the issue of "posterior collapse" often seen
in VAEs, where the latent variables are ignored by a
#dailyreport #promptengineering #vae #recsys #emacs
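A minimal NumPy sketch of the nearest-neighbour lookup at
the core of VQ-VAE (my own illustration, not code from the
paper; the codebook here is random, and training details
such as the straight-through gradient and commitment loss
are omitted):

import numpy as np

def quantize(z_e, codebook):
    # Squared L2 distance from each encoder output to each codebook entry.
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)        # index of the nearest learned embedding
    return codebook[idx], idx     # discrete latent: the chosen embeddings

rng = np.random.default_rng(0)
codebook = rng.normal(size=(512, 64))  # a finite set of K=512 embeddings
z_e = rng.normal(size=(10, 64))        # a batch of encoder outputs
z_q, idx = quantize(z_e, codebook)     # z_q is what the decoder sees

Because the decoder only ever sees one of the K codebook
vectors, the latent cannot be smoothly ignored the way a
continuous posterior can.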

Title: P2: Prompt-engineering CoT [2024-11-20 Wed]
more.
- CoT prompting constrains the model to follow an
artificial strategy curated through human knowledge and
intervention, which could be biased by the prompt
designers.
- - - - - - - - - - - - - - - - - -
I have been reading about the Residual Vector Quantisation
Variational AutoEncoder (RQ-VAE); a sketch of the idea
follows below.
- https://arxiv.org/pdf/1711.00937
- https://notesbylex.com/residual-vector-quantisation
#dailyreport #promptengineering #vae #recsys #emacs
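A rough sketch of residual quantisation (my illustration,
with random codebooks; a real RQ-VAE learns them jointly
with an encoder and decoder): each level quantises what the
previous levels failed to explain, and the stacked indices
form the discrete code.

import numpy as np

def residual_quantize(x, codebooks):
    residual, codes = x.copy(), []
    for cb in codebooks:
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(axis=-1)
        idx = d.argmin(axis=1)       # nearest entry in this level's codebook
        codes.append(idx)
        residual -= cb[idx]          # pass the leftover to the next level
    return np.stack(codes, axis=1)   # one discrete index per level

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 32)) for _ in range(3)]      # 3 levels
codes = residual_quantize(rng.normal(size=(8, 32)), codebooks)  # shape (8, 3)

These per-level index tuples are the kind of semantic IDs
the generative-retrieval paper above feeds to its
Transformer.
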
Title: P2: P1: Prompt-engineering CoT [2024-11-20 Wed]
answering shortly. Ex. "Yes. No. Idk." Models are
highly influenced by the distribution they have been
trained on.
- The model starts to struggle with generating the correct
CoT paths when the steps become 3 or
#dailyreport #promptengineering #vae #recsys #emacs
Title: P1: P1: Prompt-engineering CoT [2024-11-20 Wed]
selecting the most probable next word. This gives a big
reasoning boost.
- LLMs can reason if we consider the alternative decoding paths.
- The model is predisposed to immediate problem-solving, by
#dailyreport #promptengineering #vae #recsys #emacs

Title: P0: Prompt-engineering CoT [2024-11-20 Wed]
I have been reading "Chain-of-Thought Reasoning without
Prompting" https://arxiv.org/pdf/2402.10200

It is a technique that increases reasoning at the cost of
extra LLM computation: it keeps track of multiple potential
sequences at each step, then selects the top k most
probable sequences from these new sequences. It is a beam
approach that replaces the "greedy decoding" approach of just
#dailyreport #promptengineering #vae #recsys #emacs
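A tiny sketch of that beam idea, assuming a hypothetical
log_probs(tokens) interface that returns the model's
next-token log-probabilities (this is my paraphrase of the
mechanism, not the authors' code; the paper's CoT-decoding
variant branches only on the first token and then continues
greedily):

def beam_search(log_probs, prompt, k=5, steps=50):
    beams = [(list(prompt), 0.0)]     # (sequence, cumulative log-prob)
    for _ in range(steps):
        candidates = []
        for seq, score in beams:
            # Expand every kept sequence with every candidate next token.
            for tok, lp in log_probs(seq).items():
                candidates.append((seq + [tok], score + lp))
        # Keep only the top-k most probable sequences.
        beams = sorted(candidates, key=lambda b: -b[1])[:k]
    return beams

Greedy decoding is the k=1 special case; the reasoning
boost comes from the alternative paths that survive when
k > 1.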

Title: P4: Negative sampling in NLP [2024-11-03 Sun]
log σ(−v_dog · v_car) + log σ(−v_dog · v_apple) +
log σ(−v_dog · v_house)
#dailyreport #negativesampling #sampling #llm #recsys

Title: P3: Negative sampling in NLP [2024-11-03 Sun]
and negative samples.

Example "The dog is playing with a bone," and assume a
window size of 2 positive samples for the target word
"dog" would include:
- ("dog", "The")
- ("dog", "is")
- ("dog", "playing")
- ("dog", "with")
- ("dog", "a")
- ("dog", "bone")

Negative Samples: ("dog", "car"), ("dog", "apple"),
("dog", "house"), ("dog", "tree")

calc: log σ(v_dog · v_bone) +
#dailyreport #negativesampling #sampling #llm #recsys
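Putting the calc together in NumPy (with hypothetical
random vectors standing in for learned word2vec
embeddings):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
vecs = {w: rng.normal(size=50) for w in
        ["dog", "bone", "car", "apple", "house", "tree"]}

# Maximise log σ(v_dog·v_bone) plus log σ(−v_dog·v_neg) for each
# negative: pull the true pair together, push the sampled pairs apart.
objective = np.log(sigmoid(vecs["dog"] @ vecs["bone"]))
objective += sum(np.log(sigmoid(-vecs["dog"] @ vecs[w]))
                 for w in ["car", "apple", "house", "tree"])
print(objective)

Training nudges the vectors to increase this objective,
instead of normalising over the whole vocabulary, which is
the whole point of negative sampling.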