X open-sources its recommendation feed algorithm

X has open-sourced the machine-learning-based recommendation system that powers personalized content recommendations in the 'For You' feed. The system combines candidates from two sources, Thunder and Phoenix Retrieval, and produces the final ranking with Phoenix, a Grok-based Transformer model. It analyzes a user's activity history to recommend relevant content, and hand-crafted features and heuristic algorithms have been removed.

https://news.hada.io/topic?id=26010

#machinelearning #recommendationsystem #transformermodel #xplatform #apachelicense

X Open-Sources Its Recommendation Feed Algorithm | GeekNews

X "For You" 피드의 개인화된 콘텐츠 추천 품질을 높이기 위해 개발된 머신러닝 기반 추천 시스템팔로우한 계정(Thunder) 과 비팔로우 콘텐츠(Phoenix Retrieval) 2가지 소스를 결합해 피드 구성모든 후보 게시물을 Grok 기반 Transformer 모델인 Phoenix로 평가해 최종 순위를 산출이 모델은 각 게시물의 참여 확률을 예측

GeekNews
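
For readers who want the shape of the pipeline the post describes, here is a minimal Python sketch of two-source candidate retrieval followed by model-based ranking. All names (fetch_thunder, fetch_phoenix_retrieval, phoenix_score) and the dummy scorer are hypothetical stand-ins, not code from X's actual repository.

```python
# Illustrative sketch of the two-stage flow described above:
# candidate sourcing from two pools -> transformer scoring -> ranking.
# Names and the toy scorer are hypothetical; they do not mirror X's code.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    post_id: str
    source: str                    # "thunder" (followed) or "phoenix_retrieval" (non-followed)
    features: dict = field(default_factory=dict)

def fetch_thunder(user_id: str) -> list[Candidate]:
    # In-network posts from accounts the user follows (toy data).
    return [Candidate("t1", "thunder"), Candidate("t2", "thunder")]

def fetch_phoenix_retrieval(user_id: str) -> list[Candidate]:
    # Out-of-network posts from accounts the user does not follow (toy data).
    return [Candidate("p1", "phoenix_retrieval")]

def phoenix_score(user_id: str, c: Candidate) -> float:
    # Stand-in for the Grok-based Transformer ranker: in the real system this
    # would be a learned model predicting an engagement probability.
    return 0.9 if c.source == "thunder" else 0.7

def build_for_you_feed(user_id: str, k: int = 50) -> list[str]:
    candidates = fetch_thunder(user_id) + fetch_phoenix_retrieval(user_id)
    # Rank purely by predicted engagement probability -- no hand-crafted
    # heuristics in the ranking step, per the announcement.
    ranked = sorted(candidates, key=lambda c: phoenix_score(user_id, c), reverse=True)
    return [c.post_id for c in ranked[:k]]

print(build_for_you_feed("user_123"))   # ['t1', 't2', 'p1']
```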

😂 A user asks: how do you manage 100+ ChatGPT conversations? Store the KV cache (costs RAM) or recompute it when resuming (costs compute)? Looking for a balanced solution from devs building their own LLM chatbots. #MachineLearning #KVcache #ComputationalTradeoff #ChatbotDevelopment #MemoryOptimization #TríTuệNhânTạo #TốiƯuHiệuSuất #TransformerModel #GiaoTiếpAI

https://www.reddit.com/r/LocalLLaMA/comments/1q8eqtc/longterm_kv_cache_storage_or_reruns_for_ongoing/
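
One rough way to frame the tradeoff in that thread: estimate how much memory a stored KV cache costs per conversation versus the one-off compute cost of re-running prefill on resume. A back-of-the-envelope sketch, assuming Llama-3-8B-like dimensions (32 layers, 8 KV heads via GQA, head dim 128, fp16); substitute your own model's config.

```python
# Keeping a conversation's KV cache resident vs. recomputing it on resume.
# Dimensions below are assumptions (roughly Llama-3-8B with GQA).

def kv_cache_bytes(seq_len: int,
                   num_layers: int = 32,
                   num_kv_heads: int = 8,
                   head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:   # fp16/bf16
    # Both keys and values are cached, hence the factor of 2.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

per_conv = kv_cache_bytes(seq_len=4096)                        # one 4k-token conversation
print(f"per conversation: {per_conv / 2**20:.0f} MiB")         # ~512 MiB
print(f"100 conversations: {100 * per_conv / 2**30:.0f} GiB")  # ~50 GiB

# Recomputing instead: resuming re-runs prefill over the stored transcript,
# which costs latency once per resume but no standing memory. That is why
# many self-hosted chatbots persist only the transcript (a few KB of text)
# and rebuild the cache on demand, optionally saving hot caches to disk as
# a middle ground.
```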

South Korea’s Capital Market Institute marked its 28th anniversary with a conference on AI-driven innovation in financial investment, highlighting rising AI patent activity, the need for high-quality data, and the challenges of AI adoption in high-risk financial sectors.
#YonhapInfomax #AIFinance #CapitalMarketInstitute #FinancialInvestment #TransformerModel #DataQuality #Economics #FinancialMarkets #Banking #Securities #Bonds #StockMarket
https://en.infomaxai.com/news/articleView.html?idxno=81011
Capital Market Institute Marks 28th Anniversary with Conference on 'AI and Innovation in Financial Investment'


Yonhap Infomax
Had perhaps the #geekiest #tshirt ever printed up ..because well, we are going to be living with this, for better or worse, for a while. #TransformerModel #AttentionIsAllYouNeed #aihype #AIpocalypse
The post discusses the impact of GPT-4, one of the most advanced transformer models, on the development of generative AI tools. These tools use a self-attention mechanism to create content that mimics a particular style. The post also highlights the potential fears associated with such tools. https://blog.cloudflare.com/secure-generative-ai-applications/ #GenerativeAI #GPT4 #TransformerModel #softcorpremium
How to secure Generative AI applications

Learn best practices for securing generative AI applications based on Cloudflare's experience protecting some of the largest AI applications in the world

The Cloudflare Blog
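
Since the post above attributes these tools' style-mimicking ability to self-attention, here is a minimal NumPy sketch of scaled dot-product self-attention (the mechanism from "Attention Is All You Need"). Purely illustrative; it is not Cloudflare's or OpenAI's code, and real transformers add multiple heads, masking, and projections learned during training.

```python
# Minimal scaled dot-product self-attention, for illustration only.
import numpy as np

def self_attention(x: np.ndarray, w_q: np.ndarray, w_k: np.ndarray, w_v: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model); w_q/w_k/w_v: (d_model, d_head)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # similarity of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # each output is a weighted mix of all tokens

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 5, 16, 8
x = rng.normal(size=(seq_len, d_model))
out = self_attention(x,
                     rng.normal(size=(d_model, d_head)),
                     rng.normal(size=(d_model, d_head)),
                     rng.normal(size=(d_model, d_head)))
print(out.shape)   # (5, 8)
```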

#ChatGPT is all the rage. What's this all about and what's the wider context? This paper gives a nice and thorough survey of #TransformerModel

https://arxiv.org/abs/2302.07730

Transformer models: an introduction and catalog

In the past few years we have seen the meteoric appearance of dozens of foundation models of the Transformer family, all of which have memorable and sometimes funny, but not self-explanatory, names. The goal of this paper is to offer a somewhat comprehensive but simple catalog and classification of the most popular Transformer models. The paper also includes an introduction to the most important aspects and innovations in Transformer models. Our catalog will include models that are trained using self-supervised learning (e.g., BERT or GPT3) as well as those that are further trained using a human-in-the-loop (e.g. the InstructGPT model used by ChatGPT).

arXiv.org

Interesting preprint about image retrieval in image generation models.

They find that #StableDiffusion generates the same sofa 20% of the time when prompted with "Canvas Wall Art Print".

The problem seems to be that the training dataset has many repeated images from printshops.

https://arxiv.org/pdf/2212.03860.pdf

https://laion-aesthetic.datasette.io/laion-aesthetic-6pls/images?_search=Original+Oil+Painting+Canvas+Wall+Art+Print&_sort=domain_id

#transformermodel #generativeart #aiart

Interesting investigation of how close generated images (Stable D) are to training set images.

(I'd avoid terms like "stealing" and "blatantly copy" -- results speak for themselves).

RT: @HxxxKxxx
Do diffusion models create unique works of art, or are they stealing content directly from their training sets?

📑Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models

Via @akhaliq @zentralwerkstatt
#transformermodel #generativeart #aiart

https://arxiv.org/abs/2212.03860

Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models

Cutting-edge diffusion models produce images with high quality and customizability, enabling them to be used for commercial art and graphic design purposes. But do diffusion models create unique works of art, or are they replicating content directly from their training sets? In this work, we study image retrieval frameworks that enable us to compare generated images with training samples and detect when content has been replicated. Applying our frameworks to diffusion models trained on multiple datasets including Oxford flowers, Celeb-A, ImageNet, and LAION, we discuss how factors such as training set size impact rates of content replication. We also identify cases where diffusion models, including the popular Stable Diffusion model, blatantly copy from their training data.

arXiv.org
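
To make the abstract's "image retrieval frameworks" concrete, here is a generic sketch: embed generated and training images with a pretrained encoder (CLIP here) and flag any generation whose nearest training neighbor exceeds a cosine-similarity threshold. This is an assumption-laden illustration of the idea, not the descriptors or thresholds the paper actually uses.

```python
# Replication detection by nearest-neighbor retrieval over image embeddings.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(paths: list[str]) -> torch.Tensor:
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    # Unit-normalize so a dot product is cosine similarity.
    return torch.nn.functional.normalize(feats, dim=-1)

def flag_replications(generated: list[str], training: list[str], threshold: float = 0.95):
    gen, train = embed(generated), embed(training)
    sims = gen @ train.T                              # (n_generated, n_training) similarities
    best_sim, best_idx = sims.max(dim=1)
    return [(generated[i], training[best_idx[i].item()], best_sim[i].item())
            for i in range(len(generated)) if best_sim[i] >= threshold]

# Example (hypothetical file paths):
# matches = flag_replications(["gen_sofa.png"], ["train_000.jpg", "train_001.jpg"])
```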


If text-to-image models such as #dalle2 can be thought of as searches on large amounts of image data, is it then theoretically possible, given the right input, to find/generate *exactly* one of the input images?

#transformermodel #GAN #generativeart #AIart #latentspace @Bildoperationen @Quasimondo