@FrVaBe FWIW I'm not a timescale specialist, but I know a thing or two about Postgres ...
I think this very much depends on what you mean by a "long running job". Is that one long transaction? That might indeed be a problem, as it will block vacuum and might cause bloat.
But maybe that's OK. Maybe your application does not generate enough dead tuples for this to matter. Or maybe it's worth it if the subsequent runs will be faster.
@FrVaBe If it's not one long transaction for the whole process, this shouldn't be a problem from the DB point of view - it won't block any cleanup or cause similar issues. It's essentially a sequence of small transactions, and the database is designed to handle that just fine.
Of course, I don't know if it's worth it - but I guess if the gains were not significant, you wouldn't be asking the question. I'm just saying the database should handle this OK.
@FrVaBe Please don't move data around manually with select/insert/delete on TimescaleDB tables.
The data is already organized into table partitions, hence there's usually no advantage in moving data around manually.
The size of these partitions, which are called hypertable chunks, can be changed with set_chunk_time_interval.
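For example, something along these lines (a minimal sketch; `metrics` is just a placeholder hypertable name, and the new interval only applies to chunks created after the call):

```sql
-- Future chunks of the hypertable will span one day each;
-- already existing chunks keep their current size.
SELECT set_chunk_time_interval('metrics', INTERVAL '1 day');
```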
If that doesn't fit your needs, native PostgreSQL partitioning might solve the problem, as you can detach a partition, which is in fact just a table. Detaching is fast and doesn't block ongoing processes.
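A minimal sketch of that approach, assuming a range-partitioned table `measurements` with a partition `measurements_2023_01` (both names hypothetical):

```sql
-- The detached partition becomes an ordinary standalone table.
ALTER TABLE measurements DETACH PARTITION measurements_2023_01;

-- On PostgreSQL 14+ the CONCURRENTLY variant avoids a blocking lock on the parent:
-- ALTER TABLE measurements DETACH PARTITION measurements_2023_01 CONCURRENTLY;
```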
But it all depends on the use case and the goal you want to achieve.
@sjstoelting It is about a daily job that will iterate over about 360 hypertables and move chunks containing older data to another tablespace, as described here:
https://docs.timescale.com/use-timescale/latest/user-defined-actions/example-tiered-storage/
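In essence the job does something along these lines (trimmed down from the linked example; `metrics`, the `history` tablespace and the 3-month cut-off are just placeholders for my actual setup):

```sql
DO $$
DECLARE
  chunk REGCLASS;
BEGIN
  -- Move every chunk that only contains data older than 3 months
  -- to the slower 'history' tablespace.
  FOR chunk IN
    SELECT c FROM show_chunks('metrics', older_than => INTERVAL '3 months') AS c
  LOOP
    RAISE NOTICE 'Moving chunk %', chunk;
    PERFORM move_chunk(chunk => chunk, destination_tablespace => 'history');
  END LOOP;
END
$$;
```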
The first execution will be the crucial one, because a lot of chunks have to be moved and that will take hours.
As an alternative, I tried to run a dedicated job for each hypertable, but I ran into "not enough background workers" (?) problems.
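If I understand the docs correctly, the number of jobs that can run concurrently is bounded by the background worker settings, so something like this might be needed (values purely illustrative; both settings require a restart to take effect):

```sql
-- Allow more TimescaleDB background workers (one is needed per concurrently running job) ...
ALTER SYSTEM SET timescaledb.max_background_workers = 16;
-- ... and make sure the overall worker-process budget covers them
-- (TimescaleDB workers + parallel workers + PostgreSQL's own workers).
ALTER SYSTEM SET max_worker_processes = 27;
```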