I use Frigate and Home Assistant on different hosts, and the only port allowed from Frigate to Home Assistant is the unauthenticated API port. For normal users accessing Frigate, I run an oauth2-proxy instance on the same host (same compose file) as Frigate, tied to a third host running Keycloak. go2rtc is on the Frigate host, but it only talks to Frigate and the cameras themselves. You can also expose go2rtc if you want to access the streams directly from outside, but your Home Assistant does not need to. I find this better than hitting the cameras directly, since their onboard processing is not really meant to serve a whole bunch of streams at once.
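For anyone curious what that restream arrangement looks like in the Frigate config, here's a minimal sketch (the stream name and RTSP URL are placeholders, not my actual setup): go2rtc holds the single connection to the camera, and Frigate consumes the local restream.

```yaml
# Sketch of a go2rtc restream; camera name and URL are placeholders.
go2rtc:
  streams:
    driveway_cam:
      - rtsp://user:pw@<camera-ip>:554/stream1 # the one connection to the camera

cameras:
  driveway_cam:
    ffmpeg:
      inputs:
        # Frigate reads from go2rtc's local restream, not the camera itself
        - path: rtsp://127.0.0.1:8554/driveway_cam
          roles:
            - detect
            - record
```

Outside viewers can then also hit go2rtc's restream without adding load on the camera.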

I followed the docs for the Home Assistant to Frigate stuff with the GIF notifications, and it's working fine. I also use the Frigate integration (installed via HACS), so maybe a lot is being done for me.

You seem a bit more network-savvy than me. All I could figure out is that the Frigate integration (also via HACS for me) talks to Frigate and asks it where to get the video from. If go2rtc is enabled in Frigate, the integration tries to stream from go2rtc. Without my Docker stack being in host network mode, that wouldn't work for me.

With no go2rtc, the Frigate integration asks Frigate where to get the stream, and from what I can tell it's told to pull it from the camera directly.

All just guesses on my end. Hopefully I don’t sound too sure of myself because I’m not really sure.

Yeah you probably need to pass the tpu to the VM directly. But openvino on CPU has been just fine for me.
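For reference, running OpenVINO on the CPU instead of a passed-through TPU is just a detector setting in the Frigate config; a minimal sketch (detector name is arbitrary):

```yaml
# Sketch: OpenVINO detector on CPU — no hardware passthrough needed.
detectors:
  ov:
    type: openvino
    device: CPU # use GPU here instead if you have working iGPU passthrough
```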

Although I've noticed that in 0.17 it's started complaining that the ov (OpenVINO) detector takes a long time, with an absurdly large value in ms. Nothing seems to be broken, and restarting the container clears it.

I'll keep an eye out for that. So far the Inference Speed is holding steady at 8.47 ms.

I checked and technically it’s on the GPU, but it’s Intel integrated graphics (i7-11700T). I don’t have a separate GPU in that system. Everything seems to work fine, even when it’s complaining about speed.

It might also be due to these being USB cameras (long story) and if the stream drops, ffmpeg crashes and restarts.

That CPU has UHD Graphics 750 which is newer than mine which has 730. Should work quite nicely.
Yes, and for extra fun I'm running this in Docker on Debian in a VM, with the GPU function of the CPU passed through to the Debian guest. I migrated from VMware years ago and never bothered trying Proxmox containers.
I also have Frigate on Proxmox with a Google Coral, but mine has been rock solid. The only difference is that I use an LXC instead of a VM. I recall there being more issues passing hardware through to VMs in Proxmox, since they don't like to share.
I have used Frigate on bare metal with a Google Coral in an E-key M.2 slot, then moved it to a Synology plus a USB Coral. On both systems I have never had any issues with the Coral. If I had to nitpick, it would be that Frigate would incorrectly trigger person detections on things like a blanket or my couch.

That’s good to hear. That reinforces my suspicion that my problems were caused by passing it through to the virtual machine using Proxmox.

You might be interested in trying to enable the YOLOv9 models. The developer claims they are more accurate, and so far I’m tempted to agree.
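If it saves anyone a search, switching to a YOLOv9 model with the OpenVINO detector looks roughly like this (the model path and 320 imgsize here match my export; adjust to whatever you export):

```yaml
# Sketch: OpenVINO detector with an exported YOLOv9 ONNX model.
detectors:
  ov:
    type: openvino
    device: GPU
model:
  model_type: yolo-generic
  width: 320  # should match the imgsize used during model export
  height: 320 # should match the imgsize used during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t-320.onnx
  labelmap_path: /labelmap/coco-80.txt
```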

I don’t have too much processing power unless I move my frigate to my proxmox. Let me know if you do and it’s better. I might make the switch!

I'm in a similar situation: I have a Coral TPU, but I've switched to OpenVINO, and I see fewer false positives as well.

I suspect the Frigate devs aren't working as hard on keeping the Coral working with their ML models. Also, that Coral driver is pretty stale; it's from the 2014 era of Google Maps blurring car license plates.

I have dual Coral M.2s on Proxmox with Frigate in an LXC, and I've never had to restart.

Thanks for the write up!

Sounds like LXC is the way to go to pass a Coral through. Not sure why it’s so flaky with the Debian VM.

I've been trying to configure Frigate for a few days now, and I've got it all working by restreaming through go2rtc, because my WiFi cameras only allow a limited number of connections. I can view my cameras just fine in the portal, but I gave up trying to add them to Home Assistant because no matter what I did, I would only get a still image.

My setup seems the same as yours (Frigate in Docker via a Proxmox LXC), but I don't have any external devices; I'm just using the CPU of my server.

Would it be possible to see your config file for this? I’m having a hard time understanding how you removed go2rtc. Also, are you using substreams at all?

I don't have an external GPU either; the onboard Intel graphics is what I use now. Also worth mentioning: to use integrated graphics, your Docker Compose needs:

```yaml
devices:
  - /dev/dri/renderD128:/dev/dri/renderD128
```

I’m not using substreams. I have 2 cameras and the motion detection doesn’t stress the CPU too much. If I add more cameras I’d consider using substreams for motion detection to reduce the load.
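If I did add substreams, the idea would be to point the detect role at the camera's low-resolution stream and keep record on the main stream; a sketch with placeholder URLs and resolutions:

```yaml
# Sketch: detect on the substream, record on the main stream.
cameras:
  front_cam:
    ffmpeg:
      inputs:
        - path: rtsp://user:pw@<ip-addr>:554/main # full-resolution stream
          roles:
            - record
        - path: rtsp://user:pw@<ip-addr>:554/sub  # low-resolution substream
          roles:
            - detect # motion/object detection runs on the cheaper stream
    detect:
      width: 640  # match the substream's resolution
      height: 360
```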

Your still frames in Home Assistant are exactly the problem I was having. If your cameras really do need go2rtc to reduce connections (my WiFi camera doesn't seem to care), you might try changing your Docker container to network_mode: host and see if that fixes it.
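The compose change I mean is just this (a sketch; the service block is trimmed down to the relevant line):

```yaml
# Sketch: run the Frigate container on the host network so
# Home Assistant can reach go2rtc's ports directly.
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    network_mode: host
```

Note that with host networking the `ports:` mappings are ignored, so the container's ports are exposed as-is on the host.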

Here's my config. Most of the annotations were put there by Frigate, and I've de-identified everything. Notice at the bottom that go2rtc is all commented out, so if I want to add it back I can just remove the #s. Hope it helps.

config.yaml

```yaml
mqtt:
  enabled: true
  host: <ip of Home Assistant>
  port: 1883
  topic_prefix: frigate
  client_id: frigate
  user: mqtt username
  password: mqtt password
  stats_interval: 60
  qos: 0
cameras: # No cameras defined, UI wizard should be used
  baby_cam:
    enabled: true
    friendly_name: Baby Cam
    ffmpeg:
      inputs:
        - path: rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
          roles:
            - detect
            - record
      hwaccel_args: preset-vaapi
    detect:
      enabled: true # <---- disable detection until you have a working camera feed
      width: 1920 # <---- update for your camera's resolution
      height: 1080 # <---- update for your camera's resolution
    record:
      enabled: true
      continuous:
        days: 150
      sync_recordings: true
      alerts:
        retain:
          days: 150
          mode: all
      detections:
        retain:
          days: 150
          mode: all
    snapshots:
      enabled: true
    motion:
      mask: 0.691,0.015,0.693,0.089,0.965,0.093,0.962,0.019
      threshold: 14
      contour_area: 20
      improve_contrast: true
    objects:
      track:
        - person
        - cat
        - dog
        - toothbrush
        - train
  front_cam:
    enabled: true
    friendly_name: Front Cam
    ffmpeg:
      inputs:
        - path: rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
          roles:
            - detect
            - record
      hwaccel_args: preset-vaapi
    detect:
      enabled: true # <---- disable detection until you have a working camera feed
      width: 2688 # <---- update for your camera's resolution
      height: 1512 # <---- update for your camera's resolution
    record:
      enabled: true
      continuous:
        days: 150
      sync_recordings: true
      alerts:
        retain:
          days: 150
          mode: all
      detections:
        retain:
          days: 150
          mode: all
    snapshots:
      enabled: true
    motion:
      mask:
        - 0.765,0.003,0.765,0.047,0.996,0.048,0.992,0.002
        - 0.627,0.998,0.619,0.853,0.649,0.763,0.713,0.69,0.767,0.676,0.819,0.707,0.839,0.766,0.869,0.825,0.889,0.87,0.89,0.956,0.882,1
        - 0.29,0,0.305,0.252,0.786,0.379,1,0.496,0.962,0.237,0.925,0.114,0.879,0
        - 0,0,0,0.33,0.295,0.259,0.289,0
      threshold: 30
      contour_area: 10
      improve_contrast: true
    objects:
      track:
        - person
        - cat
        - dog
        - car
        - bicycle
        - motorcycle
        - airplane
        - boat
        - bird
        - horse
        - sheep
        - cow
        - elephant
        - bear
        - zebra
        - giraffe
        - skis
        - sports ball
        - kite
        - baseball bat
        - skateboard
        - surfboard
        - tennis racket
      filters:
        car:
          mask:
            - 0.308,0.254,0.516,0.363,0.69,0.445,0.769,0.522,0.903,0.614,1,0.507,1,0,0.294,0.003
            - 0,0.381,0.29,0.377,0.284,0,0,0
    zones:
      Main_Zone:
        coordinates: 0,0,0,1,1,1,1,0
        loitering_time: 0
detectors: # <---- add detectors
  ov:
    type: openvino
    device: GPU
model:
  model_type: yolo-generic
  width: 320 # <--- should match the imgsize set during model export
  height: 320 # <--- should match the imgsize set during model export
  input_tensor: nchw
  input_dtype: float
  path: /config/model_cache/yolov9-t-320.onnx
  labelmap_path: /labelmap/coco-80.txt
version: 0.17-0
#go2rtc:
#  streams:
#    front_cam:
#      - ffmpeg:rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
#    baby_cam:
#      - ffmpeg:rtsp://user:pw@<ip-addr>:554/cam/realmonitor?channel=1&subtype=0&unicast=true&proto=Onvif
```

I followed your steps for removing go2rtc and it fixed my issues in Home Assistant. I was a little worried about the results because I'm running 4 WiFi cameras and 1 dual camera (Tapo C240d) and thought it might be too much for my setup, but everything works perfectly fine. If anything, it works better now, because I can have my cameras in Home Assistant using the Advanced Camera Card. It also seemed to fix my issue with not being able to view clips from the C240d. Not sure how or why, but clip review just works now.

I haven’t tried switching my detection model yet, but that’s my next goal. Thank you for helping me with this.

“Maybe because I run Frigate in a virtual machine in Proxmox, so the Coral has to be passed through to the VM? Not sure.” FreeBSD + Linux VM under bhyve. Never had problems.

It’s a bit buried in the documentation, but

Frigate summed up perfectly in a single statement.

Wonderful application, awfully convoluted documentation.

Agreed.

However, I find their “Ask AI” LLM helper to actually be helpful for this. It's linked at the bottom of the page on docs.frigate.video, and IMHO it's one of the few LLMs that answers fairly accurately about items in the documentation.

That’s one hell of a positive review.

Sooo much better than projects with no documentation