Across most of my blog content, one user category stands out: "Source China"
Do other tech bloggers see similar, unusual activity?
New Post on koolaid.info: PostgreSQL 16 on Ubuntu 24.04 LTS for VB365 v8
#VeeamVanguard
https://koolaid.info/postgresql-16-on-ubuntu-24-04-lts-for-vb365-v8/
New Post on koolaid.info: Prepare Ubuntu 22.04 for VB365 v8 Proxy use
#VeeamVanguard
https://koolaid.info/prepare-ubuntu-22-04-for-veeam-backup-for-m365-v8-proxy-use/
New post on koolaid.info! Automating Let's Encrypt Lifecycle with Posh-Acme and Cloudflare #VeeamVanguard
https://koolaid.info/automating-lets-encrypt-lifecycle-with-posh-acme-and-cloudflare/
New Blog Post: Maintaining Recoverability in Veeam Capacity Tier Migrations #VeeamVanguard
http://www.koolaid.info/maintaining-recoverability-in-veeam-capacity-tier-migrations/
Veeam Backup & Replication administrators often want the ability to transition between cloud object storage providers in today's era of backup systems management. This can be due to several factors, including cost, availability, geo-location, and cloud preference. Historically the way to address object storage in VBR is through a Scale-Out Backup Repository, or SOBR, where you have one or more block-based primary repositories on-premises in the performance tier and then a single object storage bucket with a Cloud Service Provider (CSP) in the capacity tier.

The capacity tier can be used in one of two ways: copying backups to object storage as soon as they are created, or moving backups off the performance tier once they age out of the operational restore window. As you can see in the screenshot below, you can also choose both options, giving you both offload of storage and 3-2-1 level data protection. Since version 12, VBR allows multiple buckets in the capacity tier, but they must all be of the same type (object or block) and, in the case of object storage, of the same Cloud Service Provider type (Amazon S3, Azure Blob, S3 Compatible, etc.).

Changing to a new Cloud Service Provider

Now that we've covered what the capacity tier is, let's discuss how you can change the CSP who provides it, since you can't mix providers in the same SOBR tier. If you can handle the bandwidth, moving is quite simple. First, disable the capacity tier altogether and allow the wizard to finish processing. Next, add your new repository or repositories as backup repositories, and then in the capacity tier configuration screen shown above, click the "Choose…" button for the capacity tier repository, remove the existing repository, and add the new ones. Once you complete the wizard, you will be prompted to choose whether to move only the latest set of restore points or all that are present in the performance tier repositories. Veeam Backup & Replication will then immediately begin offloading your chosen restore points to the capacity tier, and copies will begin with the next run.
Migrating Extended Backup Chains

As you may be thinking, this sounds a bit too simple: what about all those restore points you've got with your old CSP? If you need to retain those, as most do, you will now need to strategize how to move your retained backups from one object storage system to another. Unfortunately there is no silver bullet here, but there are some definite guideposts to lean on.

The first option I would look to is a native data migration service or capability offered by the object storage platform you are moving into; if one exists, I recommend you use it. For S3 this would be AWS DataSync, and it is best for moving into S3, especially from other hyperscale-level object storage platforms. DataSync is great as it is easily configurable and highly performant, while being well supported by many vendors for migrations.

My second choice for doing this background migration would be the rclone utility. Rclone is an open-source project that allows data migration across a wide variety of storage types, and it supports object-to-object migrations well via sync, copy, or move operations. This is an excellent choice if you are more familiar with traditional robocopy-type tools or are comfortable being more hands-on with the migration. The only drawbacks to rclone are that finding an optimized set of settings for migrating between systems can take some trial and error, and some vendor systems do not fully support it as a migration tool.

Making Retired Capacity Tier Restore Points Visible

Now we reach the meat of our post. Veeam does not allow you to directly view capacity tier data either outside of the capacity tier or on the source VBR server at all. This is mostly due to an overwhelming concern for data safety, but in any case it is a limitation. To expose restore points that were in your old capacity tier before migrating to the new location, you can follow these steps.
Your backups are now available for any normal restore or interaction purposes. Understand that unless immutability is set on your backups it is possible to interact with this data, so if you are planning to migrate the data you should tread lightly. I would also either destroy this temporary landing spot or put your capacity tier repositories into maintenance mode before proceeding with your migration.
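Circling back to the rclone option for a moment, here is a sketch of what a bucket-to-bucket copy might look like. The remote names (old-csp, new-csp) and bucket name are placeholders you would define first with rclone config, and the parallelism flags are only a starting point to tune per vendor; the script prints the command rather than running it so you can review it before committing.

```shell
#!/bin/sh
# Placeholder remotes; define them first with `rclone config`.
SRC="old-csp:veeam-capacity-tier"
DST="new-csp:veeam-capacity-tier"

# Starting-point tuning flags; expect some trial and error per vendor.
# --fast-list trades memory for fewer listing calls on large buckets.
FLAGS="--transfers 32 --checkers 64 --fast-list --progress"

# Printed for review; drop the echo to actually run the migration.
echo "rclone copy $SRC $DST $FLAGS"
```

Running `rclone copy` rather than `sync` is the safer choice here, since copy never deletes anything on the destination.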
New Post on koolaid.info! #VeeamVanguard
https://www.koolaid.info/veeam-backup-for-microsoft365-v8-release-thoughts/
Today Veeam released the long-awaited version 8 of their Microsoft 365 backup product, commonly referred to as VB365. I've had the opportunity to be somewhat involved in some of the new capabilities of this release and I'm excited to see it out. This version has not only the normal complement of new features and capabilities but also some significant architectural changes that should make it more usable for the large enterprise and service provider customer bases. In this post I'm going to go through the What's New and Release Notes documents and provide a bit of insight about what is particularly impactful to me. As always, you should keep the user guide handy as well, but we'll focus on the shiny for now.

First and foremost, I think of this as a scalability release, and I believe Veeam does as well. While there are new data protection capabilities, the focus is making the product more performant for more customer personas. At the top of the list is the concept of proxy pools. Previously, each job you created mapped to a single repository, which then mapped to a single proxy/worker machine. These proxies are compute-wise "expensive," typically 8 vCPU and 32 GB of RAM, and each maintained a metadata cache in a JetDB. This could be problematic in a number of ways: in sizing, knowing how many "objects" you have assigned to a given proxy; in processing, a noisy neighbor can make the proxy server less performant for other workloads; and so on. With proxy pools, each repository (object storage only) is still related to a job, but its proxy relationship is now to a pool of proxies, making it more ephemeral: the relationship to a given proxy is only made at the time of job kickoff, resulting in better performance and scalability. VB365 itself will look at the pool and decide which proxy to put a job run onto based on performance at the time.
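To make the pool sizing arithmetic concrete, here is a back-of-napkin calculation. The object count is a made-up example, and the per-proxy figure uses the conservative 2,000-object number I prefer rather than Veeam's 4,000 ceiling:

```shell
#!/bin/sh
# Hypothetical environment size; substitute your own protected object count.
OBJECTS=250000
OBJECTS_PER_PROXY=2000   # conservative; Veeam best practice allows up to 4,000

# Integer ceiling division: round up so a final partial proxy is counted.
PROXIES=$(( (OBJECTS + OBJECTS_PER_PROXY - 1) / OBJECTS_PER_PROXY ))
echo "Pool needs $PROXIES proxies for $OBJECTS objects"
```

At 250,000 objects that works out to 125 proxies, comfortably under the 150-per-pool maximum.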
If you find you've over-assigned a given proxy pool, you can simply add more proxies to the pool, up to the 150 maximum mentioned in the What's New. Considering the Veeam best practice is up to 4,000 objects per proxy (I personally prefer to stay in the 2,000 range to allow for the unknown), that's 300,000 to 600,000 objects per proxy pool! Another consideration regarding proxy pools and performance is that you can have multiple proxy pools per VB365 server. Those of us in the field have known for quite a while that Exchange-based workloads (Mail) and SharePoint-based workloads (SharePoint, OneDrive for Business, Teams) perform differently, so if you are operating at scale you may want to consider having two separate pools and dividing between them based on workload type.

You know how I mentioned above that in previous versions VB365 maintained a JetDB with the metadata cache on each proxy for the repositories it manages? Well, making that more transient required large-scale architectural changes that should pay dividends moving forward. First, all of that repository metadata, along with the configuration DB, is being moved to a PostgreSQL instance. Much like Veeam Backup & Replication, the default is to install Postgres right onto the controller server, but it is also possible to build it on another VM/elsewhere (on Linux, in a cloud DB service, etc.) and then link the controller to it during upgrade/install. There is also a PowerShell cmdlet (Set-VBOPSQLDatabaseServerLimits) that will right-size the Postgres settings for the size of virtual machine you are running it on. I've written on how to do external Postgres for Veeam before, and that's still very much relevant here. The second change needed to make proxy pooling work is to implement some form of message queue. There are many of these out there (AWS SNS, Azure Service Bus, and RabbitMQ all come to mind), but Veeam has chosen to go with the open-source NATS.io project for its needs.
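To give a feel for what "right-sizing" Postgres means here, these are the kinds of postgresql.conf memory settings that get tuned to the host. The values below are generic rule-of-thumb illustrations for a hypothetical 8 vCPU / 32 GB Postgres host, not Veeam's official numbers; use the cmdlet for the real thing:

```ini
# postgresql.conf (illustrative values for an 8 vCPU / 32 GB host)
shared_buffers = 8GB            # ~25% of RAM is a common starting point
effective_cache_size = 24GB     # ~75% of RAM
maintenance_work_mem = 1GB
work_mem = 64MB
max_worker_processes = 8        # match the vCPU count
```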
NATS will essentially allow Veeam to treat each M365 item (think email message or document level) as a task to be handled independently, providing fault tolerance across the proxies in the pool and handling the management of those tasks. Again, by default the Windows version of NATS will be installed on the controller VB365 system, but you can also pre-build this as a standalone external system, or even a cluster, depending on the size of the environment.

Even as we get into the "other" features, we have some real bangers to lead off. I'm going to rearrange their order a bit, because to me the Linux backup proxies are massive. As you can have tens if not hundreds of proxies in a given environment, that is a lot of cost uplift to put Windows licensing onto every single one of those proxies. Linux-based proxies are also typically more accepting of IaC or DevOps methodologies, allowing for better uptime and management. One thing I'll call out from other notes is that the Linux proxy relies on the Microsoft feed of the dotnet runtime, as opposed to the version available in the Ubuntu package manager. Microsoft has deprecated this capability, so at this time you cannot set up a Linux proxy on Ubuntu 24.04.

Prior to this release, in the modern API era of Teams backup support, Veeam only protected channel-based chats. With this release they are adding support for shared and private Teams chats as well. Be mindful, though, that because they use the metered Teams Export Graph APIs for this, there will in some cases be significant costs related to protecting this data. I'm not saying you shouldn't do it, but just be aware. Finally, Veeam has gotten around to simplifying the process of adding object storage as a repository to VB365. It used to be you had one workflow to...
[📜] #Veeam Data Cloud Microsoft 365: Remove Deleted Sites from Backup Jobs
In this blog post I show you how Veeam Data Cloud alerts you to deleted #SharePoint or #Teams sites, and how to resolve these warnings if it's intentional.
[📜] World Backup Day 2024: Being SMART and Predicting Hardware Failures
All devices have a limited lifespan, but we don't need to wait for them to break to know it's time to replace them, as I discuss in this #WorldBackupDay post!
http://micoolpaul.com/2024/03/22/world-backup-day-being-smart-and-predicting-hardware-failures/
[📜] World Backup Day 2024: SLAs & Exceeding Them
Why do we invest so much redundancy in infrastructure, then let backups, our last resort for data resilience, have minimal room for error? Thought piece in new post!
https://micoolpaul.com/2024/03/21/world-backup-day-2024-slas-exceeding-them/