I don't think people appreciate the role that #OperaSoftware played in fostering the #OpenWeb and #IndieWeb during the first #browserWar (when the #OperaBrowser was still built on their proprietary #Presto engine), and, a fortiori, the role its surrender played in their demise (when it switched to being “just another #WebKit/#Blink skin”), despite the browser never even reaching a 3% market share.
In the five years between the creation of the #WHATWG and Opera's switch from Presto to WebKit (and then Blink), their role within the working group as an independent standards implementer was essential. Anything supported by two out of three vendors (at the time: Apple, Mozilla, Opera) meant different _engines_ implemented the standard. Today, three out of five implementations agreeing is nearly meaningless, since they are most likely just WebKit and its forks.
The Opera/Presto browser was pretty close to being a “swiss army knife” for the web. Aside from being a #browser with a solid, modern rendering engine and decent standards support (for the time), it also integrated (in the same UI!) a workable #email client, a decent #IRC client, and a competitive #RSS reader. Not only did the browser support web standards better than some of its competitors (including WebKit) in many areas, it also put effort into supporting #microformats.
As an example of how the Opera UI fostered web standards, not only did it do automatic feed discovery (allowing subscription to RSS feeds even if they weren't announced on the visible part of the web page), but it famously featured a navigation bar with next/prev/up/top links extracted from appropriately rel-marked link elements in the page (and, for many common cases, even when they were NOT properly rel-marked).
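That navigation bar and the feed button were driven by ordinary document metadata. As a rough illustration (this is not Opera's code; the class and names are mine), here is how a client can discover those rel-marked links using only the Python standard library:

```python
from html.parser import HTMLParser

class LinkDiscovery(HTMLParser):
    """Collect rel-marked <link> elements from a page's <head>:
    the same metadata Opera used for its navigation bar and feed discovery."""
    NAV_RELS = {"next", "prev", "previous", "up", "top", "start", "home"}
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.nav = {}    # rel -> href
        self.feeds = []  # list of (title, href)

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        rel = (a.get("rel") or "").lower()
        if rel in self.NAV_RELS:
            self.nav[rel] = a.get("href")
        elif rel == "alternate" and (a.get("type") or "").lower() in self.FEED_TYPES:
            self.feeds.append((a.get("title"), a.get("href")))

page_source = """<html><head>
<link rel="next" href="/page/2">
<link rel="up" href="/archive">
<link rel="alternate" type="application/rss+xml" title="Posts" href="/feed.xml">
</head><body>...</body></html>"""

parser = LinkDiscovery()
parser.feed(page_source)
print(parser.nav)    # {'next': '/page/2', 'up': '/archive'}
print(parser.feeds)  # [('Posts', '/feed.xml')]
```

A few dozen lines, in other words: the information is sitting right there in the document head, waiting for a client that bothers to look.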
But the most impressive (and underrated) feature of Opera was #OperaUnite. First introduced in 2009 in a beta release of Opera 10.10, Opera Unite was a built-in web server with server-side JavaScript scripting for writing small static and dynamic websites, accessible either directly (using UPnP to expose them on the Internet) or through a proxy service offered by Opera itself.
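To make the concept concrete: strip away Opera's server-side ECMAScript API and its proxy/UPnP plumbing (none of which is reproduced here), and a Unite “service” boils down to a tiny web server running on the user's own machine. A minimal sketch of just that core idea:

```python
import http.server, socketserver, threading, urllib.request

class TinyService(http.server.BaseHTTPRequestHandler):
    # The gist of a Unite-style service: a dynamic page generated on the
    # user's own machine, served directly to whoever asks.
    def do_GET(self):
        body = f"<h1>Served from my own machine</h1><p>You asked for {self.path}</p>"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the example quiet
        pass

# Bind to an ephemeral local port and serve in the background.
server = socketserver.TCPServer(("127.0.0.1", 0), TinyService)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{port}/photos") as resp:
    page = resp.read().decode("utf-8")
server.shutdown()
print(page)
```

The hard part Unite solved wasn't this; it was the reachability (NAT traversal or proxying) and the zero-setup packaging for non-technical users.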
https://web.archive.org/web/20120121122103/http://dev.opera.com/articles/view/opera-unite-developer-primer-revisited/

Read that again: in the years before its demise, the Opera/Presto browser not only integrated features to access a large chunk of the Internet aside from the web (email, USENET, IRC), but it featured a web server.
In a period when most major players were working towards centralization of the web, Opera pioneered an effort that —if successful— would have made it possible for every Internet user to take both a passive and an active role in it.
Opera in the Presto days was a pioneer. Anybody who enjoys a #PWA (#ProgressiveWebApp) today should be aware of the efforts made by Opera to standardize their Widgets feature, even if the standard they promoted was ultimately obsoleted by the current one, which relies on modern client features that were not available at the time.
The Opera-designed “demonstrative” Unite Applications were media, photo and file sharing applications. Does that make you think of anything?
Sometimes I wonder how different things could have been if the timing had been different. When #OperaUnite was first announced, #ActivityPub wasn't a thing yet, StatusNet had just been born, diaspora* didn't exist, and the only other major bidirectional federated protocol was XMPP, which had existed for 10 years and was in the process of being #EmbraceExtendExtinguish-ed by Facebook and Google.

I have no problem imagining a different timeline, where #ActivityPub was already well established, and the demo #OperaUnite applications for media and photo sharing had implemented basic support for it, resulting in self-hosted lightweight alternatives to #PixelFed or #FunkWhale.

And this is actually the vision I have as an ultimate goal for the #Fediverse: one where, thanks also to client support, hosting and participation become even more trivial than setting up a static website.

In many ways, Opera giving up on their Presto engine marked not only the end of the browser war, with WebKit/Blink the uncontested winner, but also the end of truly inspiring (inspired?) client innovation for the open Internet. Though possibly not entirely by Opera's own fault: in the same period Firefox also largely seemed to “give up” on that front, even going as far as removing features it already had (such as RSS support).
With the modern #OperaBrowser now just a derelict ghost of its past self, hooked into proprietary initiatives (think of its Messenger for closed silo networks) and cryptocurrency shilling, some of its legacy is now being carried by _another_ Chromium skin/fork: @Vivaldi
Although I do not appreciate it being partially closed source, or its reliance on Blink (which, for example, precludes #JpegXL), it does seem to still be interested in keeping up the spirit of the “swiss army knife of the (open) web”.
One of the interesting ways in which this shows up is that in addition to email, RSS and calendars, Vivaldi has also actively promoted support for #Mastodon, in a very simple yet effective way (providing a Web Panel for their instance; you can add your own). I expect the same will work on other #Fediverse platforms, as long as they provide a functional web interface with good “small screen” support (since this is effectively what the Web Panels use).
(Off topic for the thread, but playing around with the Vivaldi Web Panel features I discovered that keyboard input in my #FingerMaze game is broken in case of vertical layout. It was designed for cellphones so it's not a big loss, but still, I think I should fix it.)
(OK, now that I fixed #FingerMaze for keyboard use in rotated view and you can finally play it in a Vivaldi Web Panel, let's get back on topic.)

The #VivaldiBrowser is the closest thing we have to a “swiss army knife for the open Internet” today, and yet it doesn't even have feature parity with the late Opera/Presto. For example, it has no IRC client.

But in the context of my vision for the #Fediverse, the most glaring omission is the lack of an equivalent to Opera Unite, an incentive to the development of easy-to-deploy self-hosted websites.

Even if Vivaldi (the company) did share my vision of an open web, I have my doubts that it has the energy and workforce necessary to push it. The fact that their main product is proprietary (despite the abundance of open source software they leverage) is also a downside.
Getting Mozilla on board would be of great help in this, but considering the downwards direction they have taken with Firefox, that's even less likely (seriously, not even RSS?).
Which is a pity, because two independent browsers implementing support for common lightweight server applications in the spirit of the Opera Unite ones could be a major push in the right direction. And without at least one other browser on board, even if Vivaldi did invest in something like that, their efforts alone would get nowhere.
People may dismiss the usefulness of the “swiss army knife” concept pushed by Opera/Presto up to 10 years ago and by @Vivaldi now, citing “bloat”, “lack of focus” or the classic principle of doing “one thing well” instead of 100 things poorly (sometimes called the Unix philosophy). There is merit to the objection, but I have never seen it put into practice as it should be: on the contrary, feature rejection, or even worse removal, has been to the detriment of “doing one thing well”.

Two of my #petPeeves in this regard are with #Mozilla #Firefox, and in both cases they are about feature removal because of perceived bloat.

The first is the removal of support for the #MNG format. The purported reason for this was the “bloat” of linking a 200KB library. Reading the issue tracker for this 20 years later, when Firefox installations are 200MB and counting, is … enlightening:
https://bugzilla.mozilla.org/show_bug.cgi?id=18574

I still care about #MNG support not for the format itself —it's quite clear that it has irredeemably failed— but because the same argument can be used in the future to stymie adoption of other formats such as #JpegXL, which is currently supported in Firefox Nightly, and will likely receive the same treatment (I wonder if with the same excuses) now that Google has decided to drop support for it from Chrome.
IOW, the issue isn't so much with the specific format (although that has its importance: MNG was the best we had at the time for a unified format that supported animation, transparency and optionally lossy compression), but the active choice to not uphold the interests of the #openWeb. The same thing holds for the second pet peeve of mine: Mozilla's decision to remove RSS and Atom feeds support.
Firefox had some support for all three aspects of web feeds (discovery, visualization, subscription), and it was all wiped out with the release of Firefox 64, with maintenance cost being the (purported) reason:
https://www.gijsk.com/blog/2018/10/firefox-removes-core-product-support-for-rss-atom-feeds/
Even if we accept the motivations and that WebExtensions would be the best way to reimplement the features, the question remains: why didn't Mozilla provide an official extension for it?

If you want an example of why the absence of feed discovery built into the browser (or at least offered through a default-installed official extension) is a problem, consider this recent post on @fediversenews by @atomicpoet
https://mastodon.social/@atomicpoet/109900176041961778
—having to jump through hoops, looking at the page source code to find web feeds because the browser has removed the discovery feature, is something that can trip up even competent experts.
(And yes, the website _could_ advertise the presence of the feeds on the visible part of the page, and the absence of visible links _is_ to be blamed on them, but on the other hand: why duplicate the information when the browser can (and actually used to!) show you the information advertised in the document metadata, where it is supposed to be?)

#Mozilla's choice to remove their built-in web feed support without providing an official extension to carry on the legacy is another strike to the #openWeb and #indieWeb on their side.

I often wonder what has been going on inside #Mozilla. #Firefox reached its largest market share (around 30%) some 10 years ago. Since then, it has been inexorably losing market share. There is little doubt that this has been largely due to the growth of mobile and Google's unfair marketing advantage, BUT:

I have little doubt that Mozilla's response has been the worst possible one: they have chosen to get into a “race to the bottom” based on mimicry instead of playing to their strengths or finding new ones through innovation. I can't say for sure that their market share wouldn't have fallen this quickly if they had taken a different path, but I know for sure that there are people who switched away because Firefox no longer had a compelling reason to be used over the competition.

Again, this isn't about #MNG or #JpegXL or #RSS or web feeds support _specifically_: it's about the priority policies.

I do understand and appreciate that even just the maintenance of the engine to keep pace with the evolution of web standards is a huge undertaking —it's why so many browsers have just given up and chosen to “leech” on WebKit or Blink instead.

When the only reason to use your browser is that it's the only FLOSS alternative to Google's, you have a problem.

The fact that @Vivaldi, a Chromium reskin with some proprietary glue, has more personality than #Firefox (that doesn't even seem to have a Fediverse presence) is something that should really be a wake-up call for @mozilla

And before anybody gets into the comments to praise #Mozilla for its history of web standards and user privacy defense —I don't need you to remind me of that. That's not the point. The point is that to actually be able to do that you need something more than “I'm not Google”.

And the irony here is that while #Firefox has nothing to claim for itself other than “not Google”, @Vivaldi _does_, even if it's still using Blink as its web engine, and is thus subject to Google's whims on that side (one example for all: #JpegXL support). Heck, even the new Opera is more than just “not Google” —although it's pursuing all the wrong “personality” traits for that.
Why is having a personality important? Because it's one of the pillars on which your capability to defend your position is founded. Mozilla cannot protect web standards through Firefox if their go-to solution is to remove support for standards that don't get the adoption they wish for in the timeframe they expect: nobody is going to adopt a standard if there is a credible threat of support for it being senselessly removed in the near future.
The #DoNotTrack header has been deprecated, and has been largely useless because it was never adopted by most advertisers, using the cop-out of it not being legally binding. Despite this, #Firefox (and most other browsers, with the only exception of Apple's Safari AFAIK) still support sending the header, despite it being arguably a waste of bandwidth and implementation resources (UI options to control its settings, JS access to it, etc). Why do they still do it?
Because it's part of their personality: even if just at face value, DNT header support is a signal that the browser cares about user privacy.
(Don't get me started on the “new” #GlobalPrivacyControl standard when it would have sufficed to update the DNT spec in relation to the new legislation.)
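Mechanically, keeping DNT costs almost nothing, which is what makes its retention such a cheap personality signal. A sketch of both sides of the header (the server-side check is hypothetical; honoring DNT was always voluntary, which is exactly why it never gained teeth):

```python
import urllib.request

# What a privacy-minded browser does: attach "DNT: 1" to every request.
req = urllib.request.Request("https://example.com/", headers={"DNT": "1"})

# What a cooperating server or ad network is *supposed* to do with it.
# Nothing forces this check; it's purely advisory.
def tracking_allowed(request_headers: dict) -> bool:
    return request_headers.get("DNT") != "1"

print(req.get_header("Dnt"))            # urllib normalizes header capitalization
print(tracking_allowed({"DNT": "1"}))   # False
print(tracking_allowed({}))             # True
```

One header in, one dictionary lookup out: the entire “implementation burden” browsers would shed by dropping it.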
So while one could place reasonable confidence in @mozilla upholding past, current and future privacy-oriented standards, I don't feel the same concerning the #openWeb.
I'm sure people have different ideas about what it means to support the #openWeb. I think first and foremost it means allowing users (on both sides of the connection) to use the protocols and file formats of their choice. Every time a browser fails to implement (or worse, decides to remove) support for a standard protocol or file format, it's failing the open web. Half-assing the implementation of web standards was basically #Microsoft's staple behavior during the first #browserWar.
Microsoft had reasons for this: at first it was because they didn't “get” the Internet, later on it was because it was the only way they had to (attempt to) control it. They did all they could to cripple it: remember when #OperaSoftware released a “Bork” edition of the #OperaBrowser in response to #Microsoft serving them intentionally broken CSS?
https://press.opera.com/2003/02/14/opera-releases-bork-edition/
Now imagine what the Internet would have been like if Opera, @mozilla and few others hadn't held their ground.
If you think what Microsoft did was insane, consider this: @Vivaldi had to change their user agent identification because #Google, #Facebook, #Microsoft and even #Netflix were intentionally breaking their websites when detecting the #VivaldiBrowser:
https://vivaldi.com/blog/user-agent-changes/
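To see why dropping the token helps, here is the kind of naive server-side sniffing involved (the server logic and UA strings below are illustrative, not taken from any real site: an allow-list of “known good” browsers instead of feature detection):

```python
import re

def serve_page(user_agent: str) -> str:
    # Hypothetical allow-list sniffing of the kind Vivaldi ran into.
    # Anything not on the "known good" list gets a degraded experience.
    if re.search(r"Vivaldi", user_agent):
        return "broken-or-degraded page"
    if re.search(r"Chrome|Firefox|Safari", user_agent):
        return "full page"
    return "unsupported-browser page"

# Vivaldi's old UA ended with a Vivaldi/x.y token; the new one omits it,
# leaving a plain Chrome-compatible string.
vivaldi_old = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
               "(KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36 Vivaldi/2.10")
vivaldi_new = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
               "(KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36")

print(serve_page(vivaldi_old))  # broken-or-degraded page
print(serve_page(vivaldi_new))  # full page
```

Same engine, same capabilities, different treatment: the only variable is a marketing string.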
#GAFAM are against the #openWeb —and the worst in the bunch is Google, that also holds a dominant position with their browser both on the desktop and mobile space.
But the worst here isn't that #Google is actively against the #openWeb: it's that in contrast to the first #browserWar, there is really nobody left to stand up to them.
Consider for example @davew's write-up on Google's effort to deprecate #HTTP
http://this.how/googleAndHttp/
and consider that #Firefox, the only actual alternative, is also on Google's page:
https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http/
albeit less aggressively so.

Under the same pretense of security, support for classic (some would say obsolete) protocols such as #FTP and #Gopher has already been removed from all major browsers. In some browsers, such as #Firefox, this has been an intentional choice. Others, like @Vivaldi, have been basically forced into this position by their reliance on Google's engine.

And yes, I claim that security is just a pretense. Ad networks known to sell your data to the highest bidder and serve malware don't give a rat's ass about your security and privacy. The only thing they care about is making sure _they_ are the ones getting your data, and _they_ are the ones serving you the ad, even if it's malvertising.

(Firefox may not have such motives, but they definitely have an interest in reducing the code base, making maintenance easier for them.)

Honestly, the thread took a more depressing turn than planned, and I'm sorry for that. The original intent was quite the opposite: to celebrate the importance of even the smallest acts of resistance against apparently overwhelming odds, even when the outcome is still not really the fair, open Internet one might have been fighting for. I could go with the Ursula K. Le Guin quote against capitalism now, or say that the war is only lost when we stop fighting, but I think we can do better.
I am almost surprised that nobody has pointed out yet that protocols other than #HTTP(S) are irrelevant in a discussion about the #openWeb —which would be one of those pedantic, technically correct (the best kind of correct!) observations that completely misses the point. Yes, it's technically true that the World Wide Web is built on the #HTTP protocol and the #HTML and related file formats and specifications (such as #CSS and #JavaScript). But there is no #openWeb without an #openInternet.
And one of the keys to an open anything is ease of access. And sure enough, there are still plenty of dedicated tools to access specific parts of the #Internet that are not the #WorldWideWeb: clients for FTP, gopher, finger, USENET, email, IRC or even new hypertext navigation protocols like #Gemini exist.
But why should I need a different client for each when I could access the whole Internet from a single client?
Why should I need to switch clients when following an FTP or #Gemini URL in an #HTTP-served #HTML page, or conversely when following an HTTP link from a #Gemtext page? Why shouldn't my Gemini client be able to render HTML pages delivered over the Gemini protocol, and my web browser able to render #Gemtext natively if served over HTTP?
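The #Gemtext half of that question is genuinely small. As a minimal sketch (my own toy converter, covering only headings, links, bullets and preformatted blocks, not the full gemtext spec), rendering it natively is roughly this much work:

```python
import html

def gemtext_to_html(src: str) -> str:
    """Toy gemtext-to-HTML renderer: headings, links, bullets, preformat."""
    out, pre = [], False
    for line in src.splitlines():
        if line.startswith("```"):
            pre = not pre
            out.append("<pre>" if pre else "</pre>")
        elif pre:
            out.append(html.escape(line))
        elif line.startswith("=>"):
            # Link line: "=> URL optional label"
            parts = line[2:].strip().split(maxsplit=1)
            url = parts[0] if parts else ""
            label = parts[1] if len(parts) > 1 else url
            out.append(f'<p><a href="{html.escape(url)}">{html.escape(label)}</a></p>')
        elif line.startswith("###"):
            out.append(f"<h3>{html.escape(line[3:].strip())}</h3>")
        elif line.startswith("##"):
            out.append(f"<h2>{html.escape(line[2:].strip())}</h2>")
        elif line.startswith("#"):
            out.append(f"<h1>{html.escape(line[1:].strip())}</h1>")
        elif line.startswith("* "):
            out.append(f"<li>{html.escape(line[2:])}</li>")
        elif line:
            out.append(f"<p>{html.escape(line)}</p>")
    return "\n".join(out)

sample = "# My capsule\n=> gemini://example.org/log Gemlog\n* one\n* two"
html_out = gemtext_to_html(sample)
print(html_out)
```

If a toy script can do this, a browser engine certainly could; what's missing is the will, not the capability.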
This is why the “swiss army knife” browser model is essential to the #openInternet, and a fortiori for the #openWeb.
Instead, we're seeing a growing, grotesque separation between a “lightweight” Internet and a “heavyweight” Internet where —ironically— the “lightweight” clients support a wider range of protocols and metadata, whereas the “heavyweight” clients are gravitating towards being HTTP-only, frequently eschewing useful metadata.
Why is it that a historical but up-to-date (latest version from Jan 2023) textual client like Lynx can not only connect to FTP, Gopher and finger in addition to HTTP, but also present the user with the next/prev and web feed links stored in the document head, while the most recent version of Firefox can do none of those things, and is likely destined to lose even more functionality in the future?
And no, the answer is *not* «ah, but Firefox has to dedicate much more resources to supporting the latest versions of the massive, quickly-evolving HTML, CSS and JavaScript standards». The answer is not that, because Firefox actually had support for those things and actually spent resources on _removing_ them. And while for some of them (e.g. web feeds) an argument could be made that the implementation needed a rewrite, I doubt that's the case for the removed protocols.
This is frustratingly compounded in major browsers by a lack of extensibility: while it is generally possible to define _external_ protocol handlers, it's generally not possible to write handlers that would just stream the content internally.
Historical note: the much-maligned #InternetExplorer actually supported something like that:
https://textslashplain.com/2022/01/21/adding-protocol-schemes-to-chromium/
Some Qt browsers (such as Konqueror and Falkon) can also be extended using the KDE Framework KIO plugins.
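For comparison, the external-handler route that remains available today: on a freedesktop system you can route an entire URL scheme to an outside program, but the browser hands the URL off instead of rendering the stream itself. A sketch (the client name is a placeholder; any Gemini client would do):

```ini
# ~/.local/share/applications/gemini-handler.desktop
[Desktop Entry]
Type=Application
Name=Gemini handler
# "my-gemini-client" is a placeholder for whatever client you actually use
Exec=my-gemini-client %u
MimeType=x-scheme-handler/gemini;
NoDisplay=true
```

After registering it with `xdg-mime default gemini-handler.desktop x-scheme-handler/gemini`, clicking a gemini:// link launches the external program: exactly the hand-off, rather than internal streaming, that I'm complaining about.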
I still remember the days when Mozilla was the king of customization. They introduced the extension concept to the browser, allowing all kinds of experimentation on the UX. Many of the features we expect in a modern browser today were first introduced through XPI extensions in the Mozilla Suite of yore and the first versions of Firefox.
Now they play catch-up with whatever Chrome dictates web extensions are allowed to do, barely managing to avoid the worst:
https://www.theverge.com/2023/1/17/23559234/firefox-manifest-v3-content-ad-blocker
Again, the issue here isn't that Mozilla added support for Chrome-style web extensions to Firefox. It's that it did so while removing support for “legacy” extensions. And while I'm sure there were good technical reasons why the existing implementation couldn't be kept and was holding back engine progress, like in the RSS/Live Bookmark case, I have my doubts that it could not be replaced with something more modern that still provided the same or —at worst— a similar interface.
Even assuming the new architecture is so wildly different from the previous one as to make supporting legacy extensions impossible, I find it extremely unlikely that it wouldn't be possible to design an extension interface that would allow pluggable protocol handlers and image format support in modern browsers.
Why do smaller niche browsers have better support for these things than the mainstream ones?
@oblomov @davew Most of Firefox's revenue comes from Google to begin with.
@ocdtrekkie it does make one wonder whether their softer demeanor now is just having lost aim, or something else.
@oblomov @Vivaldi Honestly, the right call at this point is probably for every browser to just sync its user agent string with the latest Chrome release in perpetuity, and destroy any notion that the server deserves to know what agent you're using.
@ocdtrekkie the right call should be to sue any website that sends different content to different browsers out of existence, but nobody is going to do that 8-(