(Please boost for reach. I'm intentionally not using hashtags in this post because that would obviously bias the sample horribly.)
@internic
I tried hashtag searching when I was new-ish here, and it didn't work out for me, so I've ignored them since.
It's possible I just didn't know what I was doing; sometimes I think I should look into it again.
@dougmerritt @internic
I tag my posts in the same way I use "keywords" in my peer-reviewed papers. So, my use of tags tends to be more transmission than reception.
In my experience, tag searches are no more effective than ordinary word searches.
@benjamingeer @AmenZwa @dougmerritt @brettm @SeaJay I think there are probably a lot of conditions that determine whether hashtags are useful:
1. There is one or a few obvious hashtags that you would use for a given topic.
2. It won't collide with other unrelated words that are written the same way (in a language widely used on the platform).
3. There is an obvious hashtag that's sufficiently specific to target a specific audience (rather than being a mix of very different things).
4. There is not too much spamming of the hashtag with low quality content (from bots or people habitually using the hashtag on large numbers of posts).
5. There is a critical mass of users who use the hashtag within a given time interval (either it's just a popular topic on the platform, the platform is so large that the long tail effect is in operation, or it's related to some specific event that happens in a short period of time with heightened interest).
For example, I think things like programming languages work on Fedi because they have an agreed upon name, it's often not something that people would be likely to hashtag with a different meaning, it's not overly broad, and Fedi is full of coders.
I think I've tried to use them on topics that don't meet some or all of those criteria, like questions about Apple software (where there is plenty of interest in the topic, but it's broad and gets some amount of spam) or physics (where there's a much smaller set of users who are interested, and it's a broad topic where people can be interested in very different things, both in terms of sub-fields and level of sophistication).
@SeaJay
Very true. But, mate, we don't need tags to highlight this imbecile's stupidity: res ipsa loquitur. 🤣
@brettm I don't really post about politics on Mastodon, so this isn't directly relevant to me, but I think the way Mastodon (and other fedi software) uses content warnings is inherently unworkable in a large, heterogeneous network.
The basic issue is that there are many different topics for which subsets of users may want CWs, and users are not generally mutually aware of them. In plenty of cases there are pretty strong reasons to want them, but they may only be applicable to a small subset of folks (think, for example, of various phobias). So the only logical courses of action are to CW everything (decreasing the utility and hampering usability) or to use a different mechanism.
As evidence of this, I have seen people argue about CWs on food, things with eyes, things with holes, insects, and black-and-white pictures, among other things. These are topics that most would consider innocuous, but at least some of them are legitimately problematic for a certain subset of people. There just isn't consensus on what should have CWs, and in a large, diverse network I would not expect there to ever be. So I generally view trying to make CWs work as futile.
An approach somewhat similar to the one Bluesky uses is probably more reasonable: a "labeler" flags posts as containing certain content, and then the user controls how each label is handled (show, hide completely, or place behind a CW). Labeling can be based on hashtags applied by users but can also be augmented by algorithmic processing where practical (e.g. computer vision).
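To make the idea concrete, here's a minimal sketch of viewer-side label resolution. The label names and the "most restrictive preference wins" rule are my assumptions for illustration; the real Bluesky/AT Protocol labeling system is considerably more involved:

```python
from enum import Enum

class Visibility(Enum):
    SHOW = 0  # most restrictive first: hide > warn > show
    WARN = 1  # hide behind a click-through warning (like a CW)
    HIDE = 2  # don't display at all

# Reordered so that comparison is simple: lower value = more restrictive.
class Action(Enum):
    HIDE = 0
    WARN = 1
    SHOW = 2

def visibility_for(post_labels: set[str],
                   prefs: dict[str, Action]) -> Action:
    """Resolve a post's visibility from the viewer's per-label preferences.

    The most restrictive preference among the post's labels wins, so a
    single HIDE label hides the post even if other labels are set to SHOW.
    Labels the viewer hasn't configured default to showing the post.
    """
    chosen = Action.SHOW
    for label in post_labels:
        action = prefs.get(label, Action.SHOW)
        if action.value < chosen.value:
            chosen = action
    return chosen

# Hypothetical example: a viewer who wants US politics behind a warning
# and spider content hidden entirely. These label names are made up.
prefs = {"us-politics": Action.WARN, "spiders": Action.HIDE}
print(visibility_for({"us-politics"}, prefs))             # Action.WARN
print(visibility_for({"food"}, prefs))                    # Action.SHOW
print(visibility_for({"us-politics", "spiders"}, prefs))  # Action.HIDE
```

The point of the design is that the decision lives with the viewer, not the poster: the same labeled post renders differently for different people, which sidesteps the need for network-wide consensus on what deserves a CW.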
@brettm I would consider the people with phobias or PTSD to be the most compelling case for CWs (where not using them can do real, significant harm to people). Your argument seems to be more about finding certain posts annoying, in which case it seems reasonable to place the onus on the person with the specific preferences to filter content.
But, again, even if I thought the reasons were good, it simply will not work as a system. Some people may find US politics legitimately problematic for their mental health, many just find it annoying, many others want to see it and find CWs annoying, and some believe that others need to see that content and that hiding it behind CWs is actively harmful. There's no consensus on how to handle it, and I don't see any reason to expect one to emerge, so trying to get people to use this mechanism is a waste of time and effort.
Finally, I think the attempt among random users to police behavior with respect to content warnings when there is no consensus leads to the bad "home owners association" (HOA) reputation that Fedi has on most other parts of the Internet, where people feel they're constantly being berated for not adhering to unwritten and inconsistent standards. I have personally seen this exact issue drive several interesting posters to either leave Fedi or reduce their activity here. So, in my view, the sooner people abandon trying to police CWs and shift to something more plausibly workable the better.