DHS bought a dog-like robot that it modified with an antenna array to let law enforcement overload people’s home networks to disable any “internet of things” devices, according to a transcript obtained by EFF’s Dave Maass and shared with @404mediaco. https://www.404media.co/dhs-has-a-ddos-robot-to-disable-internet-of-things-booby-traps-inside-homes/
DHS Has a DoS Robot to Disable Internet of Things ‘Booby Traps’ Inside Homes

"NEO carries an onboard computer and antenna array that will allow officers the ability to create a ‘denial-of-service’ event to disable ‘Internet of Things’ devices that could potentially cause harm while entry is made."

404 Media
@eff @404mediaco erm... and also handily turns off any recording devices so they can human rights violate to their little heart's content.

@camerondotca @eff @404mediaco hardwire your cameras too.

Who would build an IoT-enabled booby trap? That's a weird cross-section of extremely security-conscious and extremely security-incompetent.

@the_wiggler @eff @404mediaco The thing is a) yes b) yeah, but not everyone is going to do that c) this doesn't mitigate what this thing is REALLY going to be used for.
@camerondotca @eff @404mediaco the only thing that will mitigate the actual usage is a very well hidden and stocked bunker, so focusing on the silly "booby trap antenna" is more fun.

@camerondotca Other than disabling nearby mics, how might this be abused? I'm not fighting for police surveillance or abuse, but I'm looking for a more concrete argument.

I honestly believe that wireless devices shouldn't be used in critical systems, so wireless cameras will always have the downside of being wirelessly disabled. If anything, this article should push people to use more resilient systems for preventing abuse. (E.g. hardwired home surveillance)

At the same time, it shows that technology increases the likelihood of abuse and the tech should be designed in a way to minimize or prevent abuse.

@Zoarial94 wrong house, charter/constitution violations within the house if it's the right house, general wtfery over all, no?
@camerondotca I was thinking more along the lines of "what can we put on this dog to add more capabilities", but that's a fair point.
@Zoarial94 @camerondotca my guess is that the idea is preventing people from using iot devices to remote-control or automate activation of said traps. I'm sure there are already tools for jamming pure RF versions of this kind of tech, so this is kind of the equivalent for networked devices.
@the_wiggler Yes, that's the original motivation for developing this robot. My question is what abusive or malicious acts could this same robot be used for.
@Zoarial94 @camerondotca I wonder how it would affect hearing aids...
@rbairwell @Zoarial94 networked pacemakers
@camerondotca @rbairwell @Zoarial94 Yes, intentionally generating interference like this is illegal for a reason and DHS does not get a pass for it. They will harm or kill people if they use it.
@Zoarial94 @camerondotca > At the same time, it shows that technology increases the likelihood of abuse and the tech should be designed in a way to minimize or prevent abuse.

Only if you ignore all the abuse prior to mass computerization and how little of it managed to get caught on tape.

But otherwise yes. Conventional consumer wireless is for shit you generally don't care about and for which downtime is acceptable.

Otherwise you have the choices of hardwiring (ideally with fiber, completely unaffected by electromagnetic interference) or custom wireless & FSO communication systems.
@lispi314 I'm not ignoring previous abuse. Before the computerization of everything, there was still abuse. Nowadays, our own computers can aid in that same abuse. That is the point I'm trying to make.

@lispi314 DOSing wireless cameras doesn't necessarily directly aid abusers. It might make abuse easier, but it's the tools used to take down a camera that are directly aiding them.

Allowing abusers to tap in and view said cameras does aid abuse directly.

One of these scenarios has no benefit to a homeowner, and the other has a net negative effect. It is very situation-dependent. My point is, we need tech to work for us. Or at least, not work against us.
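To make the mitigation side concrete: a hardwired recorder can at least *detect* when a wireless camera goes silent, whether from a crash, jamming, or a DoS. A minimal sketch (all names hypothetical; heartbeat timestamps in seconds):

```python
def find_gaps(heartbeats, max_gap):
    """Return (start, end) spans where successive camera check-ins are
    more than `max_gap` seconds apart -- i.e. windows in which the
    camera was silent and footage may be missing."""
    gaps = []
    for prev, cur in zip(heartbeats, heartbeats[1:]):
        if cur - prev > max_gap:
            gaps.append((prev, cur))
    return gaps

# A camera that checks in every ~5s, then goes dark for 50s:
print(find_gaps([0, 5, 10, 60, 65], max_gap=10))  # [(10, 60)]
```

Detection isn't prevention, but a timestamped "camera went silent from 10s to 60s" log entry on a wired, locally stored recorder is itself evidence that something interfered.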

@Zoarial94 ...Yeah but literally any tool that increases capacity for literally anything can be misused. (I suppose you could squint that into being an argument that every tool facilitates abuse.)

Every tool is fundamentally dual use (which is why dual-use technologies regulation is bullshit, everything is dual-use and failure to figure out such a way to misuse a tool or object is simply an indication of a lack of imagination).

I also consequently do not think it is possible to design a technology to prevent its abuse. It is possible to make that not be its primary use, and to make its primary use when access isn't explicitly given to abusers safe, but guaranteeing anything more doesn't seem possible to me.

@lispi314 I disagree with your point that you can't prevent a technology from being abused. Look at zero-trust principles and encryption. Look at the steps journalists take to protect themselves.

Look at (some of) the steps Apple has taken to protect their users, like adding more encryption to iCloud.

Look at Google Pixels:
Hardware rate-limited unlock attempts.

Both iOS and Android: full-disk encryption. Permission sandboxing. Camera and mic in-use lights (the new MacBooks have this in the DISPLAY HARDWARE!)
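To illustrate what rate-limited unlock attempts buy you, here's a toy software model of exponential backoff on failed attempts (on real Pixels this is enforced by the secure element, not the OS; the class and parameters here are made up for illustration):

```python
import time

class UnlockThrottle:
    """Toy model of rate-limited unlock attempts: each consecutive
    failure doubles the enforced wait before the next try."""

    def __init__(self, base_delay=1.0):
        self.failures = 0
        self.base_delay = base_delay
        self.locked_until = 0.0

    def required_wait(self, now=None):
        """Seconds the caller must still wait before attempting again."""
        now = time.monotonic() if now is None else now
        return max(0.0, self.locked_until - now)

    def record_failure(self, now=None):
        """Delays grow 1s, 2s, 4s, 8s, ... so brute force stalls out."""
        now = time.monotonic() if now is None else now
        self.failures += 1
        self.locked_until = now + self.base_delay * 2 ** (self.failures - 1)

    def record_success(self):
        self.failures = 0
        self.locked_until = 0.0
```

With a 6-digit PIN and delays doubling per failure, exhaustive guessing becomes astronomically slow; doing this in dedicated hardware means even a compromised OS can't skip the wait.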

Companies also fix vulnerabilities to prevent abuse from bad actors or bad-faith abusers.

These technologies prevent abuse. They're certainly not perfect at the start, but they are designed to prevent abuse. That's not to say certain features can't be abused (photo scanning).

We need to fight for technology that works in our favor. It's always been a tradeoff between convenience and security, but we can make the line less and less noticeable.

@lispi314 Wired cameras which record to an encrypted disk are much harder to abuse than unencrypted wireless cameras that share their footage with a company that hands over that same footage to the police when they ask.
@Zoarial94 Indeed, proprietary malware devices that share privileged information with abusers and undesirable third parties cannot be trusted.
@Zoarial94 All of those examples are predicated on making them less practical to abuse than other options.

Zero-trust stuff is usually still readily abusable by physically breaking into places or resorting to various forms of coercion.

Google Pixels have a built-in permanent DoS? That's abusable in itself if you time things and come up with an appropriate scenario.

Android's FDE is broken in a number of ways and that only helps if the device is stolen when off or by an unskilled abuser. I also wouldn't trust any of Apple's crypto without being able to audit their code.

Hardware bypasses are possible, it's also possible to eavesdrop based on accelerometers among other sensors. It is potentially possible to add a film on the phone itself that refracts or otherwise interferes with the light in such a way as to prevent the user from being able to notice without careful inspection.

> Companies also fix vulnerabilities to prevent abuse from bad actors or bad-faith abusers.

That is part of not explicitly giving abusers access.

> These technologies prevent abuse.

Mitigate it at best, enable new forms at worst.

> We need to fight for technology that works in our favor. It's always been a tradeoff between convenience and security, but we can make the line less and less noticeable.

So as I said, we have a choice of its primary use and how effective and safe it is in doing so.

Ultimately most security is sufficient inconvenience of attack.

@lispi314 That's exactly the idea and I think you're conflating two different ideas. I'm also not being clear when I say "prevents abuse".

I only have a deadbolt on my door. This prevents people from walking in, but it doesn't prevent someone from smashing the door to bits. It prevents people who jiggle the handle from getting in. It prevents abuse.

The motivation behind any secure system is that you protect against some threat model. For nearly anyone, basic protection is enough. When I say "Preventing abuse", I mean it raises the bar.

When you're up against a 3 letter agency... Yes, there's a lot you have to consider. It's also a relatively small population in reality.

There are technologies that can nearly prevent any abuse, but they come at some cost to convenience. I run GrapheneOS, so you'd be hard pressed to find anyone who could get into my phone (without breaking my kneecaps).

While you probably can't prevent all abuse, you can prevent abuse.

@the_wiggler with how accessible a lot of IoT technology is, this is about making it harder for people to use that tech for these traps. It's about raising the bar again after it had been lowered.

At first I thought this was a terrible idea, but honestly I think there's a decent point to be made in support of this. If it goes further than a DoS, I think there should definitely be more pushback.

For me, it's hard to see how this might be abused, but I also don't have any wireless IoT devices in my home. I don't like these devices because of the security holes anyway.

@the_wiggler I should clarify I'm not in support of this robot, but I think it'll be much harder to prevent a rollout compared to strapping a gun to a robot dog.

@Zoarial94 preventing rollout is impossible. Even if it was illegal they'd do it anyway and then get taxpayers to pay the associated fines.

The point is that it's weird to be paranoid enough to have booby traps, but not paranoid enough to have realized that IoT tech is extremely susceptible to hacking.

@Zoarial94 The accessibility of the tech is irrelevant. It's *always* been a bad choice if you care about security, regardless of application.

@the_wiggler Good point. I guess I didn't consider hacking as an option, considering there are companies that just hand over access to your cameras if you ask. (If you're using wireless IoT for traps you're probably also **likely** to use Ring since you don't know better. [Big generalization])

The accessibility of IoT devices is the reason they are developing this robot. Without such easy access to this tech, they would have no reason to develop a general-purpose IoT-disabling robot.

@the_wiggler This point is exactly why I'm hesitant to say "seems fine to me" in regards to this robot. Yes, anyone who does any research should know not to trust wireless comms. Dumb criminals will use the wireless stuff, so why not let the gov have the robot?

Because there are many, many homeowners who just don't know or understand the risks of the tech they're installing in their home. Those people could get caught in the crosshairs, malicious or not. I want to figure out what specifically we could see abused.

@Zoarial94 oh, it's certainly not fine. I think it's total garbage and probably more or less pointless, but the military industrial complex chugs along.

I don't think the average person has too much to worry about via a straight DoS attack. Even if it caused every device to crash, the vast majority of people aren't automating anything that's going to blow up if disconnected. The biggest impact is on legitimate surveillance of government operations.