We use a similar approach.
Fun fact: a query like this will, once in a blue moon, return more than limit rows (here 1), because the planner may execute the inner subquery more than once and get different ids each time, which surprises a lot of people. If your code does not expect that, it can cause problems. (The article's code seems to handle it, since it collects the results into a list and iterates over them.)
delete from task
where task_id in
( select task_id
from task
order by random() -- use tablesample for better performance
for update
skip locked
limit 1
)
returning task_id, task_type, params::jsonb as params
You can avoid that by using a materialized Common Table Expression.
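A minimal sketch of what that could look like, using the same table and columns as above (the materialized keyword needs PostgreSQL 12 or later; before that, CTEs were always materialized anyway):

with picked as materialized -- materialized: the CTE is evaluated exactly once
( select task_id
from task
order by random()
for update
skip locked
limit 1
)
delete from task
where task_id in ( select task_id from picked )
returning task_id, task_type, params::jsonb as params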
https://stackoverflow.com/questions/73966670/select-for-upda...

Also, if your tasks take a long time, this creates long-running transactions, which can cause dead-tuple (bloat) problems. If you need to avoid that, you can mark the task as "running" in one short-lived transaction and delete it in another. That is more complicated, since you then have to handle the case where your application dies while it holds a task.
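A sketch of that pattern, assuming hypothetical status and locked_at columns (neither is in the schema above) and an arbitrarily chosen 10-minute timeout:

-- transaction 1 (short): claim a task
with picked as materialized
( select task_id
from task
where status = 'pending'
order by random()
for update
skip locked
limit 1
)
update task
set status = 'running', locked_at = now()
where task_id in ( select task_id from picked )
returning task_id, task_type, params::jsonb as params

-- transaction 2 (short), once the work is done; $1 is the task_id returned above
delete from task
where task_id = $1

-- run periodically: requeue tasks whose worker presumably died
update task
set status = 'pending', locked_at = null
where status = 'running'
and locked_at < now() - interval '10 minutes'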