So, #google. Take that. #Nextcloud and #photoprism on a raspi5 with 4 GB RAM, plus a bit of support and some good ideas from Claude, are like Google with a nostalgia factor. So long, s****rs!
But it runs. #didit #wellDone

The latest PhotoPrism update introduces enhanced Ollama configuration options, improved security, and multiple bug fixes related to indexing, folders, and metadata.
https://linuxiac.com/photoprism-ai-powered-photos-app-brings-better-ollama-integration/

#photoprism #opensource

Migrating Photoprism From One Machine to Another

Reading Time: 4 minutes

Because the Raspberry Pi 5, and older models, can suffer from heat throttling and other constraints, it makes sense to build a PhotoPrism library on a "normal" laptop first and migrate it to the Pi afterwards. The process is an interesting one.

Photo Consolidation

The first step is to consolidate your photos from Google Photos, Apple Photos, Flickr and any other sources you might have. The simplest method is to organise them chronologically and then spend time removing as many duplicates as possible. There are tools for that; they can help you write EXIF data back into the photos' EXIF fields, as well as look for duplicates.

Photo Ingestion

If your library is organised and ready, there are three folders that interest us.

  • ./photoprism/originals:/photoprism/originals
  • ./photoprism/import:/photoprism/import
  • ./photoprism/storage:/photoprism/storage
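Those three paths map host folders into the container. A minimal docker-compose sketch of the same mappings (host paths and service layout are illustrative, not the official template):

```yaml
services:
  photoprism:
    volumes:
      - ./photoprism/originals:/photoprism/originals   # your chronological library
      - ./photoprism/import:/photoprism/import         # staging area for later imports
      - ./photoprism/storage:/photoprism/storage       # index, sidecars and thumbnails
```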

Originals

The originals folder is where you put your chronological library. PhotoPrism automatically indexes every file in this folder. If you have decades of photos, this takes time: it applies machine learning to catalogue dates, locations, people, objects and more.

Storage

The storage folder is critical because it is where the JSON, YAML and other metadata files live. It is also where the thumbnails are stored, which is key when migrating from a high-spec laptop to a limited-spec Pi. When I migrated this folder, it held 1.8 million files for 110,000 photos.

Import

The import folder works as a stepping stone: it lets you import photos and videos at a later date. If you select "move", PhotoPrism imports the photos and then moves them out of the folder, leaving it empty.

Phone Ingestion via Photosync

PhotoSync is a partially free app; a one-time payment unlocks more functionality. With PhotoSync you can set up PhotoPrism syncing: you add a configuration title, for example "laptop", then select the destination folder, and make sure to use "create sub-directories yyyy/mm/dd" to preserve the hierarchy you spent hours preparing earlier.

Be careful: the default is usually Device Name + Album Name. There are other options, but they are out of scope for now.

The advantage of Photosync is that once it is set up correctly, it will keep uploading photos according to the hierarchy you want, eliminating the need to do things manually.

Moving to the Pi 5

Once all the heavy lifting has been done and the logs say the tasks are complete, you can move everything to an NVMe drive or an external hard drive, depending on budget and convenience. With an NVMe drive and a 400 GB library you still have headroom; with a 2-6 TB hard drive you have plenty.

It's worth keeping in mind that the docker-compose files for an Intel machine and for a Pi differ, so you will need to find a template for the Pi and the ARM architecture.
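As a sketch of what the ARM side might look like: PhotoPrism publishes multi-arch images, so on a Pi the same tag should pull the arm64 build. The exact tags and services below are assumptions; check the official Raspberry Pi documentation for the current template.

```yaml
services:
  photoprism:
    image: photoprism/photoprism:latest   # multi-arch; resolves to the arm64 build on a Pi 5
  mariadb:
    image: mariadb:11                     # the official MariaDB image is also published for arm64
```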

I like to have a folder /apps/photoprism/ for the docker-compose file on the SD card. I also have /apps/photoprism/database, which was prepared on the laptop, on the hard drive, as well as /photoprism/photos, /photoprism/storage and /photoprism/import on the external hard drive. I prefer to keep the photos and the database separated.

The Thumbnail Mistake

When I migrated from the laptop to the Pi, I didn't bother to copy the storage folder because I thought the thumbnails wouldn't take too long to generate. I quickly realised the error of my ways, shut down the Pi, rsynced the files, and plugged the drive back in before rebooting the Pi. Within seconds PhotoPrism was happy.

If I had let the Pi regenerate all of the thumbnails and video previews, it would have taken hours, if not days or weeks.

The Vanished HEIC Files

According to the laptop library I have 119,000 photos, but according to the Pi 5 I only have 110,000. That's a huge difference. I noticed that the missing photos were those taken with a phone, specifically HEIC files. I tried uploading them as JPEG and they appeared almost instantly.

The quick solution is to upload the photos as JPEG rather than HEIC via PhotoSync. The slow option, which isn't really a solution since it makes the Pi unusable for hours, is to reindex the library. I am attempting this now.

The Thumbnails Exist

The paradox in this situation is that PhotoPrism on the Intel-based laptop had already done all of the heavy lifting, so PhotoPrism on the Pi should simply have seen that each photo had a thumbnail and reflected that in the index, instead of reinventing the wheel.

And Finally

The Good

I was able to migrate my library from one computer to the other, and it worked almost flawlessly except for the HEIC issue. If I had transferred JPEG images, the migration would have been flawless.

The Mistake

The mistake is re-indexing the entire library, especially in light of the limitations of Raspberry Pi devices.

The Elegant Solution

The elegant solution is to revive an old phone that has all these photos, make sure PhotoSync is installed and set up, and tell it to upload to the Pi 5 as JPEG rather than HEIC. It takes more than five minutes, but it worked instantly with a few test photos.

Silver Lining

The proof-of-concept migration was a success. If I had moved between two Intel devices it would have been flawless; it's because I migrated from one architecture to another that I hit a little snag, which I only noticed while writing this blog post.

#laptop #photoprism #photos #pi #Raspberry #selfHosting

Frustrated by Google Photos' privacy trade-offs? 📸

PhotoPrism brings AI-powered organization—like face recognition and world maps—to your collection without the Big Tech tracking.

On PikaPods, you get the full experience with all Sponsor features included, starting at $6.20/mo. Keep your memories, lose the surveillance.

https://pikapods.com/pods?run=photoprism #PhotoPrism #Privacy #OpenSource

PhotoPrism troubleshooting tip:

If PhotoPrism crashes during photo import/indexing, it's almost certainly an out-of-memory kill.

PhotoPrism requires a minimum of 4 GB of swap space. Without it, the Linux OOM killer terminates the process during TensorFlow face detection.

Fix:
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

Add a line such as /swapfile none swap sw 0 0 to /etc/fstab for persistence.

#selfhosted #photoprism #docker #linux #homelab #troubleshooting #foss

On Familiar Faces and Forgotten Names

Reading Time: 2 minutes

It's amusing. I look through Immich and Photoprism and I am struck by how many names I have forgotten, but how easily I remember certain faces. I'm also curious to see how I remember certain names after scrolling through faces yet to be identified.

Decades ago when I was playing with iPhoto and Picasa I knew all these people well, and I saw them daily. It was easy to match faces to names. Now it's a challenge, because I haven't seen these people in years, if not decades.

For a while iPhoto and Picasa happily encouraged us to add names to faces, until people worried about privacy and the functionality had to be deactivated. It's about privacy.

Now that we are moving back to an era of locally installed apps, or self-hosting, to use a trendy term, we can allow facial recognition to learn which face belongs to which name. The challenge is remembering the name.

If Facebook were still the network of uni friends, their friends, our families and friends of theirs, then it would be easy to remember who A, B and C are.

Instead I rely on patience and triggers. I might not recognise a face for days or weeks, and eventually I remember, because by recalling name A and context C I find the name from that memory subset.

On Immich I have identified plenty of people, but I have 11,000 faces in total, so remembering all those names is highly unlikely. On Photoprism I have recognised 80 faces out of 999+ faces. Each time I recognise a dozen I end up with 999 more faces to recognise.

In some cases I met the people two or three times, so to forget their name is normal. In other cases I spent years seeing these people daily so I should remember their names with ease.

The forgotten names don't really matter. The value of PhotoPrism and Immich facial recognition is consolidating my name recollection for people who are part of my current life, whether they are people I cycle with, hike with, discuss books with or more. It would be nice to recognise people by their names as well as their faces.

Photoprism V Immich

Immich is head and shoulders above PhotoPrism when it comes to browsing and naming faces. With Immich you can recognise a face, name it, double-check, and return to naming other people within a second or two. With PhotoPrism it's slow and sluggish. For context, PhotoPrism is slow and sluggish on an HP 455 (need to double check) whereas Immich on a Raspberry Pi 5 with 8 GB of RAM is fluid.

And Finally

When Immich and PhotoPrism instances crash and I need to repopulate a library, most of the work is quick and automatic. I remember the names of via ferrata routes and places. It's adding a name to a face that I find difficult; that's the part that takes time.

And finally, I do get a sense of accomplishment with recognising a face after days or weeks of trying to remember. Why do we forget names of familiar faces?

#facialRecognition #immich #machineLearning #photoprism

On PhotoSync and Photo Uploader for Photoprism

Reading Time: 2 minutes

PhotoSync is a photo-uploading app that lets you upload to various devices and cloud services with ease. Photo Uploader for Photoprism is a specialist app for PhotoPrism. The reason I bring up both apps is that they allow you to sync to PhotoPrism.

When I was testing uploads with PhotoSync, I noticed that I can upload to the PhotoPrism library directly, but it creates a "current_phone" folder and adds the photos inside it. Since my originals folder is meant to be clean and chronological, this breaks the flow I want.

That's why I tested Photo Uploader for Photoprism. This is a specialist app, designed to upload to PhotoPrism and nothing else. I configured it, went out for a walk, uploaded a few pictures, and they appeared right where I expected and wanted them to appear.

Tidy with Photo Uploader

If I used a Fairphone 4, an iPhone SE and an iPhone 14, I'd end up with three or four camera folders, each filled with photos. It would require manual maintenance to keep the library organised chronologically.

With a chronological library, photos appear by year, month and day, so whether you upload from camera A, B or C, they all get organised into the same hierarchy, reducing the risk of duplicates, triplicates or worse. It also makes it possible to share your library between apps.

Imagine a photos directory that PhotoPrism feeds and that Immich and Nextcloud can see. If Immich and Nextcloud see the same directory as PhotoPrism, then PhotoPrism is the primary interface and Immich and Nextcloud mirror it.

In this way you have a single library to back up and maintain, and the others mirror it. With rsync you can mirror this drive to a secondary drive as a backup, and you can also have a service like kDrive, iCloud, Google Drive or another watch and mirror changes, either automatically or via cron jobs.
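A hypothetical crontab entry for a nightly mirror of this kind; the paths are placeholders, and rsync's -a (archive) and --delete (remove files that vanished from the source) flags do the mirroring:

```shell
# m  h  dom mon dow  command
  0  3  *   *   *    rsync -a --delete /mnt/photos/ /mnt/backup/photos/
```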

Versatile with PhotoSync

PhotoSync offers WebDAV, Google Drive, Google Photos, Flickr, PhotoPrism and more. If you want, you can set it up to sync automatically to the service of your choice, and you can tell it to delete photos once they're backed up. I never use the delete-after-backup option; I prefer to have files mirrored in at least one other place before deleting them.

Date Based Folders

While looking through the settings, I noticed the Date Based Formats option and the option of choosing the recording date (year + month + day). I took a test photo this morning and it was automatically sorted into 2026/02/18/filename. By keeping the desired folder hierarchy, I have little to no tidying up to do.
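The yyyy/mm/dd hierarchy that option produces can be sketched as a small path-building function; dated_path is a hypothetical helper, not part of PhotoSync or PhotoPrism:

```python
from datetime import datetime
from pathlib import PurePosixPath

def dated_path(filename: str, taken: datetime) -> str:
    """Place a photo under the yyyy/mm/dd hierarchy that
    PhotoSync's recording-date option produces."""
    return str(PurePosixPath(taken.strftime("%Y/%m/%d")) / filename)

# A photo taken on 18 February 2026 lands in 2026/02/18/
print(dated_path("IMG_0001.jpg", datetime(2026, 2, 18)))  # 2026/02/18/IMG_0001.jpg
```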

And Finally

PhotoSync and Photo Uploader for Photoprism do almost the same thing. PhotoSync feels more polished and stable. Because it lets me keep the folder hierarchy I want, I can feed Immich and Nextcloud from the shared folder without worrying about duplicates and triplicates.

If there is a breaking change in PhotoPrism or Immich and I need to start from scratch, my folder hierarchy will simplify the task, thanks in part to how PhotoSync or Photo Uploader for Photoprism helped me keep my photos organised.

#apps #iOS #mobile #photoprism #photosync #upload

Sorting Photoprism Photos With the Mistral Cat

Reading Time: 3 minutes

I chose to experiment with Le Chat by Mistral, the French AI alternative to Gemini, Claude and CatIFARTED (ChatGPT). For the experiment I copied my PhotoPrism photos from the drive connected to the Raspberry Pi over to a laptop before running scripts to sort and remove duplicates. It worked well, with a nice little bonus which I'll expand on later.

Goal: Clean Up Duplicate Photos

My objective was to remove duplicate photos from a large collection while keeping the best version of each file. I used jdupes to identify duplicates and a custom script to decide which files to keep.

The sources of duplication were that I imported photos from Google Takeout on one side, as well as from two or three iPhones and an Android phone. I suspect PhotoSync might also contribute, by encouraging the creation of one folder per device you import from.

Custom Rules for Keeping Files

After running jdupes, I set up custom rules. File naming: I told it to prefer IMG_ or VIRB names over hash-named files. iPhones, Android phones and photo cameras never, or rarely, use hash names; those are usually created by WhatsApp and similar apps.
Directory priority: keep files in human-readable directories (e.g., "Spain bike ride") over generic ones (e.g., "Photos from 2018"). Google Takeout creates two or more folders: a primary year folder with all photos from that year, plus secondary event-specific folders, named either after the name we chose, for example 'Spain bike ride', or date-based if we did not give a specific name.

In the final step I noticed that it seemed to be deleting HEIC files rather than .JPG/.JPEG files. As HEIC is usually the original, I want to keep the original. Eventually I saw that we had duplicate HEIC files, in which case I allowed it to remove duplicates of that file type. I also noticed that video files were either kept as MOV files or converted, so I accepted a rule to choose .MOV over .MP4. I used Le Chat (The Cat) to help me understand the output from the jdupes runs.
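Those rules can be sketched as a scoring function applied to each jdupes duplicate group. This is an illustrative reconstruction, not the script Le Chat produced; score and keeper are hypothetical names:

```python
import re
from pathlib import PurePosixPath

# Extension preference: HEIC and MOV are usually the originals.
EXT_RANK = {".heic": 3, ".mov": 3, ".jpg": 1, ".jpeg": 1, ".mp4": 1}

def score(path: str) -> int:
    """Higher score = more worth keeping."""
    p = PurePosixPath(path)
    s = 0
    if p.stem.upper().startswith(("IMG_", "VIRB")):
        s += 4   # camera-style filename
    if re.fullmatch(r"[0-9a-f]{16,}", p.stem.lower()):
        s -= 4   # hash-style name (WhatsApp and friends)
    if not re.match(r"Photos from \d{4}", p.parent.name):
        s += 2   # human-readable event folder beats a generic year folder
    return s + EXT_RANK.get(p.suffix.lower(), 0)

def keeper(group: list[str]) -> str:
    """Pick the file to keep from one jdupes duplicate group."""
    return max(group, key=score)

group = [
    "Photos from 2018/3f9a1c2b4d5e6f70.jpg",
    "Spain bike ride/IMG_0412.HEIC",
]
print(keeper(group))  # Spain bike ride/IMG_0412.HEIC
```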

Script Development and Testing

As we progressed through the project, Le Chat offered several types of automation. It suggested digiKam, Pillow, a bespoke Python script, or exiftool. In several cases it wrote a Python script to apply the rules and generate a list of files to delete based on the output of the jdupes run.

Testing and Iteration

Part of collaborating with AI tools is experimentation and iteration: running a command, seeing the output, understanding what you see, and then refining the command until you get what you want. It's also about seeing opportunities.

One of the scripts I got The Cat to run checked whether the files on the "to delete" list had EXIF data for the creation date, i.e. the date the photos were taken. Once a script confirmed this was the case, the process of fine-tuning the deletion script advanced. These are the rules mentioned above.

Verification and Safety Checks

We ran a lot of dry runs. When you run jdupes as a dry run, it checks for duplicates and prints to the terminal. When you have thousands of duplicates, the terminal window scrolls plenty of results out of view; that's where writing to a text file helps. It's persistent.
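jdupes prints duplicate groups separated by blank lines, so a redirected run (for example jdupes -r /photos > duplicates.txt) gives you a file a small script can parse. A sketch of that parsing, assuming the default plain-text output format; parse_jdupes is a hypothetical helper:

```python
def parse_jdupes(text: str) -> list[list[str]]:
    """Split jdupes output (groups of paths separated by
    blank lines) into one list of paths per duplicate group."""
    groups, current = [], []
    for line in text.splitlines():
        line = line.strip()
        if line:
            current.append(line)
        elif current:
            groups.append(current)
            current = []
    if current:                 # last group may lack a trailing blank line
        groups.append(current)
    return groups

sample = """/photos/a/IMG_1.jpg
/photos/b/dup_of_1.jpg

/photos/a/IMG_2.heic
/photos/c/IMG_2.jpg
"""
print(len(parse_jdupes(sample)))  # 2 duplicate groups
```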

The beauty of these text files is that they're light, and you can share them with The Cat, which, in some situations, will actually run the script you discussed with it rather than just outputting the Python script. This differs from Gemini in two ways: first, it runs the script so you don't have to; secondly, if it reaches the token limit for script execution, it gives you the Python script to run locally.

What is especially nice is that you can still keep "chatting" even if you reach that limit. It just won't run scripts internally.

Backup: Emphasized

Along the way, The Cat constantly encourages you to make sure you have a backup before running a command. As I was working from a copy rather than the primary library, I felt safe experimenting. Eventually I executed the command to delete the duplicates and ran jdupes one last time to ensure they were gone.

And Finally

While experimenting I hit the limitations of the free plan, first for code execution and then for chat. I didn't intend for it to run scripts on the files I uploaded; I think running scripts locally makes more sense. I uploaded the data so The Cat could get a better understanding of it.

Hitting the data limit is a feature. It encourages us to take a break and work on something else.

What surprised me yesterday, and again today, is that I get fatigued from playing with AI: although large language models do some of the thinking, you still need to babysit them and understand and supervise what they're doing.

#AI #cat #jdupe #machineLearning #mistral #photoprism

@pixeldorf
Nice that you've got it running again... yes, NC can be a bit temperamental.
Above all: always take a snapshot before an update. 😜

I read here that you're all using #Immich... very cool!
As a #Photoprism user, I hardly dare say anything any more. 😆

@sindum I started my #Photoprism server in the #Proxmox Homelab Project last week. After some experiments with the configuration in the first round, I indexed about 200,000 images. More than 200,000 images are now in the library, many of them RAW files. Grouping by location, faces, and calendar works just fine. The web app is pleasantly stable on Linux, Windows, Android and iOS. The files and thumbnails are stored on a NAS; the database runs on an internal SSD. #DiDay