also don't use brave or vanilla Firefox
It's an open-source tool for downloading YouTube videos.
Just about every mainstream YouTube download program you or your parents have ever used is actually just a wrapper around it.
Bonus: if you want to learn more about coding, it's not that hard to make a script that automatically downloads the latest video from a list of channels and runs on a schedule. Even AI can do it.
It’s a command-line tool. You type “yt-dlp” followed by the URL of a video, and it does the rest.
It has many other options, but the defaults are good enough for most cases.
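For instance, a minimal sketch: the URL is a placeholder, the flags are real yt-dlp options, and the snippet only prints the command so it runs even without yt-dlp installed.

```shell
#!/bin/bash
# Placeholder video URL; swap in any real one.
url="https://www.youtube.com/watch?v=dQw4w9WgXcQ"

# The basic form is simply:  yt-dlp "$url"
# Two options people commonly add on top of the defaults:
args=(
    --merge-output-format mp4                # remux video+audio into an .mp4 (needs ffmpeg)
    -o "%(uploader)s - %(title)s.%(ext)s"    # output filename template
)

# Print the full command instead of executing it, so this sketch is safe
# to run without yt-dlp installed; drop the "echo" to actually download.
echo yt-dlp "${args[@]}" "$url"
```

Remove the `echo` once you have yt-dlp and ffmpeg installed and you're set.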
There is no one-stop tutorial for stuff like this, because you could use any scripting language, and which ones you have available may depend on your OS.
But honestly, any half-decent LLM can generate something that works for your specific case.
If you really want to avoid using those, here is a simple example for Windows PowerShell.
```powershell
# yt-dlp Channel Downloader
# --------------------------
# Downloads the latest video from each channel in channels.txt
#
# Setup:
#   1. Install yt-dlp: winget install yt-dlp
#   2. Install ffmpeg: winget install ffmpeg
#   3. Create channels.txt next to this script, one URL per line:
#        https://www.youtube.com/@SomeChannel
#        https://www.youtube.com/@AnotherChannel
#   4. Right-click this file → Run with PowerShell

# Read each line, skip blanks and comments (#)
foreach ($url in Get-Content ".\channels.txt") {
    $url = $url.Trim()
    if ($url -eq "" -or $url.StartsWith("#")) { continue }

    Write-Host "`nDownloading latest from: $url"
    yt-dlp --playlist-items 1 --merge-output-format mp4 --no-overwrites `
        -o "downloads\%(channel)s\%(title)s.%(ext)s" $url
}

Write-Host "`nDone."
```

And here is my own bash script (Linux), which has only gotten bigger with more customization over the years.
(part 1, part 2 in the next reply)
```bash
#!/bin/bash
# ============================================================================
# yt-dlp Channel Downloader (Bash)
# ============================================================================
#
# Automatically downloads new videos from a list of YouTube channels.
#
# Features:
# - Checks RSS feeds first to avoid unnecessary yt-dlp calls
# - Skips livestreams, premieres, shorts, and members-only content
# - Two-pass download: tries best quality first, falls back to 720p
#   if the file exceeds the size limit
# - Maintains per-channel archive and skip files so nothing is
#   re-downloaded or re-checked
# - Embeds thumbnails and metadata into the final .mp4
# - Logs errors with timestamps
#
# Requirements:
# - yt-dlp (https://github.com/yt-dlp/yt-dlp)
# - ffmpeg (for merging video+audio and thumbnail embedding)
# - curl (for RSS feed fetching)
# - A SOCKS5 proxy on 127.0.0.1:40000 (remove --proxy flags if not needed)
#
# Channel list format (Channels.txt):
# The file uses a simple key=value block per channel, separated by blank
# lines. Each block has four fields:
#
#   Cat=Gaming
#   Name=SomeChannel
#   VidLimit=5
#   URL=https://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxx
#
#   Cat       Category label (currently unused in paths, available for sorting)
#   Name      Short name used for filenames and archive tracking
#   VidLimit  How many recent videos to consider per run ("ALL" for no limit)
#   URL       Full YouTube channel URL (must contain the UC... channel ID)
#
# ============================================================================

export PATH=$PATH:/usr/local/bin

# --- Configuration ----------------------------------------------------------
# Change these to match your environment.
SCRIPT_DIR="/path/to/script"        # Folder containing this script and Channels.txt
ERROR_LOG="$SCRIPT_DIR/download_errors.log"
DOWNLOAD_DIR="/path/to/downloads"   # Where videos are saved
MAX_FILESIZE="5G"                   # Max file size before falling back to lower quality
PROXY="socks5://127.0.0.1:40000"    # SOCKS5 proxy (remove --proxy flags if unused)
# --- End of configuration ---------------------------------------------------

cd "$SCRIPT_DIR"

# ============================================================================
# log_error - Append or update an error entry in the error log
# ============================================================================
# If an entry with the same message (ignoring timestamp) already exists,
# it replaces it so the log doesn't fill up with duplicates.
#
# Usage: log_error "[2025-01-01 12:00:00] ChannelName - URL: ERROR message"
log_error() {
    local entry="$1"
    # Strip the timestamp prefix to get a stable key for deduplication
    local key=$(echo "$entry" | sed 's/^\[[0-9-]* [0-9:]*\] //')
    local tmp_log=$(mktemp)
    if [[ -f "$ERROR_LOG" ]]; then
        grep -vF "$key" "$ERROR_LOG" > "$tmp_log"
    fi
    echo "$entry" >> "$tmp_log"
    mv "$tmp_log" "$ERROR_LOG"
}

# ============================================================================
# Parse Channels.txt
# ============================================================================
# awk reads the key=value blocks and outputs one line per channel:
#   Category Name VidLimit URL
# The while loop then processes each channel.
awk -F'=' '
    /^Cat/      {Cat=$2}
    /^Name/     {Name=$2}
    /^VidLimit/ {VidLimit=$2}
    /^URL/      {URL=$2; print Cat, Name, VidLimit, URL}
' "$SCRIPT_DIR/Channels.txt" | while read -r Cat Name VidLimit URL; do

    archive_file="$SCRIPT_DIR/DLarchive$Name.txt"  # Tracks successfully downloaded video IDs
    skip_file="$SCRIPT_DIR/DLskip$Name.txt"        # Tracks IDs to permanently ignore
    mkdir -p "$DOWNLOAD_DIR"

    # ========================================================================
    # Step 1: Check the RSS feed for new videos
    # ========================================================================
    # YouTube provides an RSS feed per channel at a predictable URL.
    # Checking this is much faster than calling yt-dlp, so we use it
    # as a quick "anything new?" test.

    # Extract the channel ID (starts with UC) from the URL
    channel_id=$(echo "$URL" | grep -oP 'UC[a-zA-Z0-9_-]+')
    rss_url="https://www.youtube.com/feeds/videos.xml?channel_id=${channel_id}"

    # Fetch the feed and pull out all video IDs
    new_videos=$(curl -s --proxy "$PROXY" "$rss_url" | \
        grep -oP '(?<=<yt:videoId>)[^<]+')

    if [[ -z "$new_videos" ]]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] RSS fetch failed or empty, skipping"
        continue
    fi

    # Compare RSS video IDs against archive and skip files.
    # If every ID is already known, there's nothing to do.
    has_new=false
    while IFS= read -r vid_id; do
        in_archive=false
        in_skip=false
        [[ -f "$archive_file" ]] && grep -q "youtube $vid_id" "$archive_file" && in_archive=true
        [[ -f "$skip_file" ]] && grep -q "youtube $vid_id" "$skip_file" && in_skip=true
        if [[ "$in_archive" == false && "$in_skip" == false ]]; then
            has_new=true
            break
        fi
    done <<< "$new_videos"

    if [[ "$has_new" == false ]]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] No new videos, skipping"
        continue
    fi

    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] New videos found, processing"

    # ========================================================================
    # Step 2: Build shared option arrays
    # ========================================================================

    # Playlist limit: restrict how many recent videos yt-dlp considers
    playlist_limit=()
    if [[ $VidLimit != "ALL" ]]; then
        playlist_limit=(--playlist-end "$VidLimit")
    fi

    # Options used during --simulate (dry-run) passes
    sim_base=(
        --proxy "$PROXY"
        --extractor-args "youtube:player-client=default,-tv_simply"
        --simulate
        "${playlist_limit[@]}"
    )

    # Options used during actual downloads
    common_opts=(
        --proxy "$PROXY"
        --download-archive "$archive_file"
        --extractor-args "youtube:player-client=default,-tv_simply"
        --write-thumbnail
        --convert-thumbnails jpg
        --add-metadata
        --embed-thumbnail
        --merge-output-format mp4
        --output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s"
        "${playlist_limit[@]}"
    )

    # ========================================================================
    # Step 3: Pre-pass - identify and skip filtered content
    # ========================================================================
    # Runs yt-dlp in simulate mode twice:
    #   1. Get ALL video IDs in the playlist window
    #   2. Get only IDs that pass the match-filter (no live, no shorts)
    # Any ID in (1) but not in (2) gets added to the skip file so future
    # runs don't waste time on them.
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pre-pass: identifying filtered videos (live/shorts)"

    all_ids=$(yt-dlp "${sim_base[@]}" --print "%(id)s" "$URL" 2>/dev/null)

    passing_ids=$(yt-dlp "${sim_base[@]}" \
        --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
        --print "%(id)s" "$URL" 2>/dev/null)

    while IFS= read -r vid_id; do
        [[ -z "$vid_id" ]] && continue
        grep -q "youtube $vid_id" "$archive_file" 2>/dev/null && continue
        grep -q "youtube $vid_id" "$skip_file" 2>/dev/null && continue
        if ! echo "$passing_ids" | grep -q "^${vid_id}$"; then
            echo "youtube $vid_id" >> "$skip_file"
            echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (live/short/filtered)"
        fi
    done <<< "$all_ids"
```

part 2
```bash
    # ========================================================================
    # Step 4 (Pass 1): Download at best quality, with a size cap
    # ========================================================================
    # Tries: best AVC1 video + best M4A audio → merged into .mp4
    # If a video exceeds MAX_FILESIZE, its ID is saved for the fallback pass.
    # Members-only and premiere errors cause the video to be permanently skipped.

    echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 1: best quality under $MAX_FILESIZE"

    yt-dlp \
        "${common_opts[@]}" \
        --match-filter "!is_live & !was_live & original_url!*=/shorts/" \
        --max-filesize "$MAX_FILESIZE" \
        --format "bestvideo[vcodec^=avc1]+bestaudio[ext=m4a]/best[ext=mp4]/best" \
        "$URL" 2>&1 | while IFS= read -r line; do
            echo "$line"
            if echo "$line" | grep -q "^ERROR:"; then
                # Too large: save ID for pass 2
                if echo "$line" | grep -qi "larger than max-filesize"; then
                    vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
                    [[ -n "$vid_id" ]] && echo "$vid_id" >> "$SCRIPT_DIR/.size_failed_$Name"
                # Permanently unavailable: skip forever
                elif echo "$line" | grep -qE "members only|Join this channel|This live event|premiere"; then
                    vid_id=$(echo "$line" | grep -oP '(?<=\[youtube\] )[a-zA-Z0-9_-]{11}')
                    if [[ -n "$vid_id" ]]; then
                        if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
                            echo "youtube $vid_id" >> "$skip_file"
                            echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Added $vid_id to skip file (permanent failure)"
                        fi
                    fi
                fi
                log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
            fi
        done

    # ========================================================================
    # Step 5 (Pass 2): Retry oversized videos at lower quality
    # ========================================================================
    # For any video that exceeded MAX_FILESIZE in pass 1, retry at 720p max.
    # If it's STILL too large, log the actual size and skip permanently.
    if [[ -f "$SCRIPT_DIR/.size_failed_$Name" ]]; then
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: lower quality fallback for oversized videos"

        while IFS= read -r vid_id; do
            [[ -z "$vid_id" ]] && continue
            echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Retrying $vid_id at 720p max"

            yt-dlp \
                --proxy "$PROXY" \
                --download-archive "$archive_file" \
                --extractor-args "youtube:player-client=default,-tv_simply" \
                --write-thumbnail \
                --convert-thumbnails jpg \
                --add-metadata \
                --embed-thumbnail \
                --merge-output-format mp4 \
                --max-filesize "$MAX_FILESIZE" \
                --format "bestvideo[vcodec^=avc1][height<=720]+bestaudio[ext=m4a]/bestvideo[height<=720]+bestaudio[ext=m4a]/best[height<=720]/worst" \
                --output "$DOWNLOAD_DIR/${Name} - %(title)s.%(ext)s" \
                "https://www.youtube.com/watch?v=${vid_id}" 2>&1 | while IFS= read -r line; do
                    echo "$line"
                    if echo "$line" | grep -q "^ERROR:"; then
                        # Still too large even at 720p: give up and log the size
                        if echo "$line" | grep -qi "larger than max-filesize"; then
                            filesize_info=$(yt-dlp \
                                --proxy "$PROXY" \
                                --extractor-args "youtube:player-client=default,-tv_simply" \
                                --simulate \
                                --print "%(filesize,filesize_approx)s" \
                                "https://www.youtube.com/watch?v=${vid_id}" 2>/dev/null)
                            if [[ "$filesize_info" =~ ^[0-9]+$ ]]; then
                                filesize_gb=$(echo "scale=1; $filesize_info / 1073741824" | bc)
                                size_str="${filesize_gb}GB"
                            else
                                size_str="unknown size"
                            fi
                            if ! grep -q "youtube $vid_id" "$skip_file" 2>/dev/null; then
                                echo "youtube $vid_id" >> "$skip_file"
                                log_error "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Skipped $vid_id - still over $MAX_FILESIZE at 720p ($size_str)"
                            fi
                        fi
                        log_error "[$(date '+%Y-%m-%d %H:%M:%S')] ${Name} - ${URL}: $line"
                    fi
                done
        done < "$SCRIPT_DIR/.size_failed_$Name"

        rm -f "$SCRIPT_DIR/.size_failed_$Name"
    else
        echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$Name] Pass 2: no oversized videos to retry"
    fi

    # Clean up any stray .description files yt-dlp may have left behind
    find "$DOWNLOAD_DIR" -name "${Name} - *.description" -type f -delete
done
```

I see.
I am not a programmer, not by a long shot. More on the grandma side of things instead. So please forgive me if I’m saying something very stupid - I’m just ignorant.
I’ve been happy with NewPipe so far; 95% of my video watching happens on my phone. The only thing NewPipe can’t do is access age-restricted videos. If this tool can do that on my phone, then I’m definitely interested.
Yes and no,
Yes because I am doing it, no because it’s just one part of the process.
NewPipe is cool, but it doesn’t run on my phone, so I needed something else.
You may have heard of Plex (“run your own Netflix”); I much prefer its competitor Jellyfin, but that doesn’t matter here.
Point is, I download my YouTube videos on a schedule/script straight into the library folder of Jellyfin, which I can then log in to from any type of device.
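The schedule part can be a single cron entry; the paths below are hypothetical placeholders, not my actual setup:

```
# Hypothetical crontab entry (edit with "crontab -e"): run the channel
# downloader script nightly at 04:00, appending its output to a log file.
0 4 * * * /home/user/scripts/yt_channels.sh >> /home/user/scripts/yt_channels.log 2>&1
```

Point the script's download directory at the Jellyfin library folder and new videos just show up.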
On iOS you can also use Firefox Focus. It doesn’t have ads on YouTube, but IIRC you can’t stay logged in because it doesn’t save cookies (though that could be a positive, depending on how you look at it).
Vivaldi on iOS also didn’t have ads on YouTube, but it’s been a while since I used it, so that may have changed, and it’s a pretty heavy browser in my experience.
Orion also supports Firefox/Chrome extensions, but in my experience its ad blocking (even with uBlock) isn’t perfect. But again, it’s been a while, so maybe it’s better now.
Mozilla Firefox isn’t much better. They have similar links to shady people, often the same shady people… That includes two friends of Jeffrey Epstein.
And Mozilla still engages in discrimination today.
From the linked document, describing an unneeded round of layoffs:
People from groups underrepresented in technology, like female leaders and persons of color, were disproportionately impacted by the [Mozilla’s] layoff.
Firefox has some very good forks including Waterfox (pretty normal) and LibreWolf (pretty privacy-hardened out of the box and may require a little Settings menu tweaking to make normal).
It’s unfortunate, but at the end of the day you kind of have to bite the bullet and accept that you will be using something downstream of something bad, e.g. Google (Chrome forks) or their money (Firefox is funded not by donations but by them).
I haven’t found one that blocks YouTube ads as well as brave does on iPhone.
Yes, I do have uBlock
I’ve been trying to pirate music for my Navidrome, and the age verification is quite literally making it impossible to download some songs.
Thankfully, some kind soul ([email protected]) told me a few days ago about monochrome.tf, which provides files in a better format anyway, so as long as the song is by an artist or band (and not an unpopular game OST 😭) it will probably be on there. I guess it’s built on Tidal.
If you're on Linux, give Phoenix for Firefox a shot. It installs a bunch of enterprise policies to harden Firefox, so you're always on the latest security patches but never with the AI/telemetry BS.
LibreWolf is also pretty good, but I mention Phoenix because it is vanilla Firefox.
yt-dlp is great for downloading media you’ve already found (or at least playlists or creator channels you’ve already found), but you can’t use it to discover new media. You still need a browser or a GUI app like FreeTube or NewPipe for that, and that works better when you’re actually signed in with your Google account, so the recommendation algorithm works and it can keep track of what you’ve watched for you.
Don’t get me wrong; I would love to limit my interaction with Google to anonymously fetching video URLs. But none of the alternatives sync my watch history between devices or recommend new videos (beyond just new uploads from subscribed channels) to me.
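For the "new uploads from subscribed channels" part specifically, you don't even need a Google account: YouTube still serves a per-channel RSS feed at a predictable URL (the same trick the bash script above uses). A minimal sketch, with a placeholder channel ID:

```shell
#!/bin/bash
# Placeholder channel URL; real channel IDs also start with "UC".
url="https://www.youtube.com/channel/UCxxxxxxxxxxxxxxxxxxxxxx"

# Pull the UC... channel ID out of the URL (grep -P needs GNU grep)
channel_id=$(echo "$url" | grep -oP 'UC[a-zA-Z0-9_-]+')

# YouTube exposes an Atom feed of a channel's recent uploads here
rss_url="https://www.youtube.com/feeds/videos.xml?channel_id=${channel_id}"
echo "$rss_url"
```

Point any feed reader at the printed URL and you get new uploads without logging in; it just won't give you recommendations or watch-history sync.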
Agreed. It’s actually the only streaming service I subscribe to because I get so much use out of it and at least some of that money goes to actual creators. Plus, YouTube Music has an insane library of obscure shit.
Granted, I still use a few of those plugins to improve the experience further.