GOG is seeking a Senior Software Engineer with C++ experience to modernize the GOG GALAXY desktop client and spearhead its Linux development
There’s a lot to be excited about, but
Job requirements
[…]
ew.
Yeah, what does GOG know?
The real source of wisdom is social media users who approach a topic with bad-faith, outrage-farming framing. I mean, just look at the upvotes, and you can easily tell how right you are; it’s basically science.
And we open the book of troll arguments to chapter 1: Ad hominem
Keep going, it really makes you look like the rational one.
Maybe try a red herring next, or a straw man; those are always popular.
Oh ok.
‘The job listing does not say anything about outsourcing your brain.’
But everyone knows that, because it is obvious on its face.
The subtext, as always, isn’t about commenting on the subject of the article or even making any kind of cogent point that could actually be rebutted. Much like the top comment, it is just running ‘ai bad’ through an LLM so that it fits the post.
Would you honestly say that the comment that I responded to was made in good faith?
Why are you so interested in defending trolls?
The irony here is rich.
Maybe. We can’t say, there is zero information there that even hints at how or how much they use AI.
It isn’t like they’re saying something specific like ‘Must be able to use Cursor, Mercurial and be able to direct multi-agent workflows’.
That bullet point reads like it’s there more to include a hot keyword for job-search sites than to actually describe the job.
It’s kind of like including the word in your comment so that you grab all of the bot upvotes and can farm outrage in a way that is objectively off-topic and unrelated to the actual post, which is about GOG moving to support Linux, not about AI.
It’d be one thing if there was something specific about the job related to AI, or if anyone involved in these comments had actually said anything of substance other than, literally, ‘ew’.
So, to my pattern recognition, this looks like every other ‘ai bad’ thread full of toxic attacks and light on actual discussion.
I haven’t met a lot of people who actually understand machine learning who say things like LLMs are ‘a known scam’.
I agree that the industry is massively overhyping the future capabilities of this kind of software in order to maintain their valuations… but the framing that AI (neural network-based machine learning) is useless is social media brain rot, not an accurate survey of the state of machine learning.
It’s a publicly traded company, isn’t it? Most likely there is some investor in the CEO’s ear asking him to push this down on all staff… so they come up with bright ideas like putting silly “requirements” like this in their job descriptions as well. And in any case, AI investors are so desperate these days, chances are that they’re doing everything they can to create general LLM FOMO in a similarly desperate push to increase adoption.
That’s what I’m guessing at least. Even to me it sounds a little like a conspiracy theory, but then again these people have a lot of influence.
The future looks to involve a mixture of AI and traditional development. There are things I do with AI that I could never touch the speed of with traditional development. But the vast majority of dev work is just traditional methods with maybe an AI rubber duck and then review before opening the PR to catch the dumb mistakes we all make sometimes. There is a massive difference between a one-off maintenance script or functional skeleton and enterprise code that has been fucked up for 15 years and the AI is never going to understand why you can’t just do the normal best practice thing.
A good developer will be familiar enough with AI to know the difference, but it’ll be a tool they use a couple times a month (highly dependent on the job) in big ways and maybe daily in insignificant ways if they choose.
Companies want a staff prepared for that state, not dragging their heels because they refuse to learn. I’ve been at this for thirty years and I’ve had to adapt to a number of changes I didn’t like. But like a lot of job skills we’ve had to develop over the years — such as devops — it’ll be something that you engage for specific purposes, not the whole job.
Even when the AI bubble does burst, AI won’t go away entirely. OpenAI isn’t the only provider and local AI is continuing to close the gap in terms of capability and hardware. In that environment, it may become even more important to know when the tool is a good fit and when it isn’t.
I am aware of that. I occasionally use AI for coding myself if I see fit.
Just the fact that active use of AI tools is listed under job requirements, and that I have seen that in more than a few job listings, rubs me the wrong way. It would definitely be my first question in the interview: clarifying what the extent of that is. I just don’t wanna deal with pipelines that break because they partially rely on AI, or a code base nobody knows their way around because nobody actually wrote it themselves.
Frankly that’s why I think it’s important for AI centrists to occupy these roles rather than those who are all in. I’m excited about AI and happy to apply it where it makes sense and also very aware of its limitations. And in the part of my role that is encouraging AI adoption, critical thinking is one of the things I try my hardest to communicate.
My leadership is targeting 40-60% efficiency gains. I’m targeting 5-10% with an upward trajectory as we identify the kinds of tasks it is specifically good at within this environment. I expressed mild skepticism about that target to my direct manager during my interview (and he agreed) but also a willingness to do my best and a proven track record of using AI successfully.
I would suggest someone like yourself is perhaps well-suited to that particular duty — though whether the hiring manager sees it that way is another issue.