Tiny Mastodon Tip to Verify Yourself
🐘:
The checkmark icons you see on Mastodon are not official. They are simply custom icons people add to their name on their instance, like the one I have on mine, usually just for fun.
But there is a way to verify yourself on Mastodon! And it’s completely free!
It works by providing proof of ownership of a website you control. This is especially useful for organizations that want to show followers their account is official.
HOW TO ❓
1. On desktop, go to Preferences > Public profile, then click on the “Verification” tab at the top ✔
2. Click on “Copy” to copy the HTML code containing a link to your Mastodon profile.
3. Open a page on the website you control and paste the code somewhere in that page’s HTML.
In that snippet, you can keep the hyperlink on the word “Mastodon” as it is by default, or attach it to a social media icon, for example.
Alternatively, you can make this link invisible (like I did on mine) and simply place this code within your website's head:
`<link href="FULL-LINK-TO-YOUR-MASTODON-ACCOUNT" rel="me">`
4. Once your website is updated and live, copy the full link to the page where you placed your Mastodon account link.
5. Back in your Mastodon account, go to Preferences > Public profile and look for the “Extra fields” section.
6. Paste your page’s link into a “Content” field and “Label” it as you wish: “Website”, “Personal page”, or anything you like!
7. Click on the “Save changes” button at the bottom
8. Your verified link(s) will appear in green on your Mastodon profile page once validated (this can take a few hours to resolve, so be patient).
9. Magic!✨
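Putting step 3 together, here is a minimal sketch of the two placements described above. The profile URL is a placeholder — substitute the full link to your own Mastodon account:

```html
<!-- Option A: invisible verification link in the page's <head> -->
<head>
  <link href="https://mastodon.social/@yourusername" rel="me">
</head>

<!-- Option B: a visible link anywhere in the page body,
     e.g. attached to a social media icon or plain text -->
<a href="https://mastodon.social/@yourusername" rel="me">Mastodon</a>
```

Whichever form you choose, the `rel="me"` attribute is what Mastodon looks for when it checks that your website links back to your profile.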

Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, two different search engines, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
This paper proposes a framework for quantitatively evaluating interactive LLMs such as ChatGPT using publicly available data sets. We carry out an extensive technical evaluation of ChatGPT using 23 data sets covering 8 different common NLP application tasks. We evaluate the multitask, multilingual and multimodal aspects of ChatGPT based on these data sets and a newly designed multimodal dataset. We find that ChatGPT outperforms LLMs with zero-shot learning on most tasks and even outperforms fine-tuned models on some tasks. We find that it is better at understanding non-Latin script languages than generating them. It is able to generate multimodal content from textual prompts, via an intermediate code generation step. Moreover, we find that ChatGPT is 63.41% accurate on average in 10 different reasoning categories under logical reasoning, non-textual reasoning, and commonsense reasoning, hence making it an unreliable reasoner. It is, for example, better at deductive than inductive reasoning. ChatGPT suffers from hallucination problems like other LLMs, and it generates more extrinsic hallucinations from its parametric memory as it does not have access to an external knowledge base. Finally, the interactive feature of ChatGPT enables human collaboration with the underlying LLM to improve its performance, i.e., 8% ROUGE-1 on summarization and 2% ChrF++ on machine translation, in a multi-turn "prompt engineering" fashion. We also release a codebase for evaluation set extraction.
We present LaMDA: Language Models for Dialog Applications. LaMDA is a family of Transformer-based neural language models specialized for dialog, which have up to 137B parameters and are pre-trained on 1.56T words of public dialog data and web text. While model scaling alone can improve quality, it shows less improvement on safety and factual grounding. We demonstrate that fine-tuning with annotated data and enabling the model to consult external knowledge sources can lead to significant improvements towards the two key challenges of safety and factual grounding. The first challenge, safety, involves ensuring that the model's responses are consistent with a set of human values, such as preventing harmful suggestions and unfair bias. We quantify safety using a metric based on an illustrative set of human values, and we find that filtering candidate responses using a LaMDA classifier fine-tuned with a small amount of crowdworker-annotated data offers a promising approach to improving model safety. The second challenge, factual grounding, involves enabling the model to consult external knowledge sources, such as an information retrieval system, a language translator, and a calculator. We quantify factuality using a groundedness metric, and we find that our approach enables the model to generate responses grounded in known sources, rather than responses that merely sound plausible. Finally, we explore the use of LaMDA in the domains of education and content recommendations, and analyze their helpfulness and role consistency.
If you've noticed Mastodon.social being offline a few evenings this week, it was under DDoS attack on those nights. When I asked, I was told the best way to help with Mastosoc's scaling and resilience-under-pressure issues would be to help fill this open DevOps position at Mastodon gGmbH:
https://jobs.ashbyhq.com/mastodon/290fd40f-125e-41fc-942d-f4ce59e6bda2
If you know anyone who might be qualified for this position, pass it on!
#Mastodon is hiring!
› Remote-only
› Full-time
Looking for:
› DevOps Engineer
› Product Designer
It could be you! Apply now: