New army🪖 dataset testing whether LLMs answer questions on violence, terrorism, and war topics

Military experts manually created challenges. To their dismay, models often did not answer.

Until they steered them.

It is definitely novel. What do you say: wonderful research? Horrible? Why?

Regarding the steering: they run a very classic safety procedure.
Identify harmful and harmless examples, compute the average activation difference between the two sets, and then...
they simply apply that direction in favor of the harmful ones, i.e. toward answering rather than refusing (a minimal sketch below).
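For readers who want to see the mechanics, here is a minimal sketch of that difference-of-means steering idea. This is not the paper's code or the Heretic library; the model name, layer index, and prompt lists are illustrative assumptions.

```python
# Difference-of-means activation steering: a hedged sketch, not the paper's implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in model; the paper works with a military-tuned gpt-oss-20b
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

layer = 6  # which residual-stream layer to read and steer; chosen arbitrarily here

def mean_activation(prompts: list[str]) -> torch.Tensor:
    """Average the last-token hidden state at `layer` over a set of prompts."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        acts.append(out.hidden_states[layer][0, -1])  # last token's activation vector
    return torch.stack(acts).mean(dim=0)

# Placeholder prompt sets; the paper uses expert-written military queries instead.
harmful_prompts = ["Placeholder refused query 1", "Placeholder refused query 2"]
harmless_prompts = ["How do I bake bread?", "Explain common chess openings."]

# "Refusal direction" = mean(harmful activations) - mean(harmless activations)
direction = mean_activation(harmful_prompts) - mean_activation(harmless_prompts)
direction = direction / direction.norm()

def steer_toward_answering(module, inputs, output):
    """Forward hook: project the refusal direction out of the residual stream."""
    hidden = output[0]
    hidden = hidden - (hidden @ direction).unsqueeze(-1) * direction
    return (hidden,) + output[1:]

# Register on one transformer block. Adding the direction back instead of removing it
# would push the model *toward* refusal: the same vector works in either direction,
# which is exactly the reversibility the post is pointing at.
handle = model.transformer.h[layer].register_forward_hook(steer_toward_answering)
```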
#AI
I find it an expected but thought-provoking point. Most safety methods are post hoc ways to change a model, so they provide only weak safety: one can just as easily apply them the opposite way.
Paper link: https://arxiv.org/abs/2603.10012
Measuring and Eliminating Refusals in Military Large Language Models

Military Large Language Models (LLMs) must provide accurate information to the warfighter in time-critical and dangerous situations. However, today's LLMs are imbued with safety behaviors that cause the LLM to refuse many legitimate queries in the military domain, particularly those related to violence, terrorism, or military technology. Our gold benchmark for assessing refusal rates, which was developed by veterans of the US Army and special forces, is to our knowledge the first dataset of its kind. We present results for refusal and deflection rates on 31 public models and 3 military models. We observe hard rejection rates as high as 98.2% and soft deflection rates ranging from 0% to 21.3%. We also present results on two additional synthetic datasets and show their correlations with the gold dataset. Finally, we perform abliteration using the Heretic library on a military-tuned gpt-oss-20b model, showing an absolute increase in answer rate of 66.5 points but an average relative decrease of 2% on other military tasks. In our concluding remarks, we argue for deeper specialization, including with mid-training and end-to-end post-training, to achieve zero refusals and maximum military task accuracy for closed military models.
