An Admin’s Guide to Fixing PeerTube

Fixing the hard stuff as a PeerTube admin.

PeerTube is an amazing platform for video, but it sometimes needs tender love and care. As with any system, problems are going to occasionally crop up, and some of the fixes are obscure enough to require a lot of research to track down.

Here’s a brief overview of how I deal with video issues as a sysadmin. This guide assumes that you’re running PeerTube on Ubuntu without a Docker container, but the concepts should translate over, regardless of your setup. No guide can cover anything and everything that comes up with self-hosting, but this one should point you in the right direction for the most common problems.

The PeerTube Flow

To understand how to deal with video issues in PeerTube, it’s important to first understand how videos are successfully created and published in the system. Here’s a very simple breakdown of the process:

  1. Ingress – a user uploads a video, either directly from their filesystem, or through the built-in yt-dlp utility for importing remote videos hosted elsewhere.
  2. Record Creation – data representing the video entry is added to the Postgres database in the  video  table. This includes the initial metadata about the video’s catalogue entry, along with values representing the video’s current state.
  3. Queuing – a number of jobs are kicked up in the system queue, each one performing operations on the video. These jobs include Transcoding, Moving to Object Storage, and Federation. Each job updates the video state, signaling to the system that it’s time for the next job to begin.
  4. Transcoding – the source video is converted into a format that works with PeerTube’s p2p player, and produces several different resolutions. There are two supported formats: Web Video, and HLS. Most PeerTube instances use HLS, but it is possible for an instance to have both. Just keep in mind: using both will double a server’s storage requirements, and having multiple resolutions will increase your requirements further. If you’re rendering a 1080p and 360p version of the video, you’ll ultimately create four versions of the same video if you use both Web Video and HLS.
  5. Object Storage – if you have Object Storage set up, this job also gets queued. Basically, it’s a transfer job to move the file off of your server, and into your bucket hosted elsewhere.
  6. Federation – once the video is stored, the data is once again updated in Postgres to reflect the final video state, and the video is dispatched to ActivityPub subscribers.

When it comes to issues with video, most breakage happens between steps 1 and 5. The most common culprits are failed transcoding or file-move jobs, usually because the video file is too big, or because of file permission problems in a given directory.
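Because each job updates the `state` column in the video table, you can often spot stuck videos straight from Postgres. This is a sketch, assuming the default database name peertube_prod; the numeric state values come from PeerTube’s internal VideoState enum (1 is “Published”), so double-check them against your PeerTube version before acting on the results.

```shell
# List local videos that never reached the "Published" state (state = 1).
# Assumes the "remote" column distinguishes local uploads from federated
# entries, and that your database is named peertube_prod.
sudo -u postgres psql -d peertube_prod -c \
  "SELECT uuid, name, state FROM video WHERE remote = false AND state <> 1;"
```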

Triaging Issues

Before doing anything, it’s important to try to gather as much information as possible about what’s going on. These are the most common steps I take when checking instance issues.

Check Service Status

If you’re using Ubuntu or an Ubuntu derivative as your server OS, chances are that you’re running systemd. It comes with a number of tools to check the status of various services, and also lets you start, stop, or reload them. It also ships with some tools to read logs and see what’s happening.

The best starting point is to first look and see if everything you need is running. Typically, this can be summed up in about three commands.

For PeerTube itself:

sudo systemctl status peertube.service

For Redis:

sudo systemctl status redis-server.service

For Postgres:

sudo systemctl status postgresql.service

Sometimes, the names of these services vary, or you might have additional named services. For example, it’s not uncommon to have multiple versioned postgresql.service units with names like postgresql@16-main.service. Make sure that everything is running.
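To avoid running three status commands by hand every time, a small loop can give you a one-line summary per service. This is just a convenience sketch; adjust the service names to match your own units (Redis in particular may be named redis rather than redis-server on some distros):

```shell
# Print the active/inactive state of each service PeerTube depends on.
# Adjust the service names to match your system's units.
for svc in peertube redis-server postgresql nginx; do
  printf '%-20s %s\n' "$svc" "$(systemctl is-active "$svc" 2>/dev/null)"
done
```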

Track Jobs in Real-Time with Journalctl

We can use journalctl to look at what’s happening at the service level.

sudo journalctl -u peertube.service

Because ActivityPub servers are pretty verbose, with a lot of things going on at once, it helps to narrow the output down to specific patterns, like error messages.

sudo journalctl -u peertube.service | grep "error"

If we want to get real fancy here, we can watch specific messages happen in real time. You can substitute "error" with whatever string you want to match.

sudo journalctl -feu peertube.service | grep "error"

Check Logfiles

If, for whatever reason, journalctl is unavailable to you, or otherwise unhelpful, try checking your logs. By default, PeerTube 6.1.0 stores its logs in storage/logs of your PeerTube directory. Usually, you’ll have two logs present: peertube-audit.log and peertube.log. The former reflects actions taken by admins, while the latter is a general catch-all.

If you just want to watch stuff happen in real time, you can use tail -f to follow that file and print changes to the console.

tail -f /var/www/peertube/storage/logs/peertube.log
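Once you know which video is misbehaving, grepping the log for its UUID (finding a video’s UUID is covered later in this guide) is often faster than watching the stream scroll by. The UUID below is just a placeholder; substitute the one you’re investigating:

```shell
# Search the full log for entries mentioning a specific video UUID.
grep 'e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf' /var/www/peertube/storage/logs/peertube.log
```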

Error Monitoring with Sentry

One great way to keep tabs on server failures is to use an error monitoring tool, such as Sentry. You can sign up to their site for a free plan, and make use of the peertube-plugin-sentry package to hook up error monitoring for your instance.

The integration is relatively simple, but it provides a comprehensive overview of errors, and includes breadcrumbs, headers, and other contexts derived from a specific issue.

It’s also possible to set up alerting services through Webhooks or direct integrations with things like Slack or Discord. I haven’t experimented too much with this, but you might be able to use the Webhooks integration with a Matrix bot specifically designed for Webhooks.

Check Disk Space

One of the biggest reasons for a PeerTube server to randomly start failing is because it’s critically low on free disk space. Typically, this can happen for one of two reasons:

  • Failed Transcoding Jobs – video transcoding never succeeded, so a file was kept locally on your server. This can stack up over time.
  • Database too Big – this is less common with PeerTube, but can absolutely become an issue if you cache everything from remote servers and never prune storage. I learned this the hard way when self-hosting Pleroma: I never deleted any content, local or remote, and ended up with five years of my server’s network snapshot clogging up Postgres.

To examine file usage on your directories, try using the du command like so:

du -d 1 -h /var/www/peertube/ | sort -h

This will print the first-level directories in /var/www/peertube with their respective human-readable file sizes, and then sort them by size. For me, it currently looks like this:

8.0K    /var/www/peertube/node_modules
8.0K    /var/www/peertube/.yarn
16K     /var/www/peertube/.local
88K     /var/www/peertube/config
48M     /var/www/peertube/.npm
563M    /var/www/peertube/.nvm
1.3G    /var/www/peertube/versions
2.4G    /var/www/peertube/.cache
75G     /var/www/peertube/storage
79G     /var/www/peertube/

We can dig into the storage directory to find out what the biggest culprits are for file storage. We’ll expand our parameters to dig two levels deep, instead of showing just the surface level.

du -d 2 -h /var/www/peertube/storage/ | sort -h

Let’s look at the output:

4.0K    /var/www/peertube/storage/cache/torrents
4.0K    /var/www/peertube/storage/cache/video-captions
4.0K    /var/www/peertube/storage/client-overrides
4.0K    /var/www/peertube/storage/original-video-files
4.0K    /var/www/peertube/storage/redundancy
4.0K    /var/www/peertube/storage/tmp-persistent
4.0K    /var/www/peertube/storage/tmp/resumable-uploads
4.0K    /var/www/peertube/storage/web-videos/private
4.0K    /var/www/peertube/storage/well-known
352K    /var/www/peertube/storage/cache/storyboards
592K    /var/www/peertube/storage/cache/previews
956K    /var/www/peertube/storage/cache
5.9M    /var/www/peertube/storage/captions
6.1M    /var/www/peertube/storage/logs
25M     /var/www/peertube/storage/plugins/node_modules
61M     /var/www/peertube/storage/plugins/data
86M     /var/www/peertube/storage/plugins
114M    /var/www/peertube/storage/bin
150M    /var/www/peertube/storage/torrents
267M    /var/www/peertube/storage/previews
273M    /var/www/peertube/storage/avatars
766M    /var/www/peertube/storage/thumbnails
1.1G    /var/www/peertube/storage/storyboards
1.5G    /var/www/peertube/storage/tmp/hls
3.5G    /var/www/peertube/storage/tmp
9.8G    /var/www/peertube/storage/web-videos
59G     /var/www/peertube/storage/streaming-playlists
59G     /var/www/peertube/storage/streaming-playlists/hls
75G     /var/www/peertube/storage

It looks like our main culprit here lives in streaming-playlists/hls, which indicates a bunch of files that failed to migrate over to cloud storage. Evidently, I have work to do.

Freeing up disk space is important, but you’re going to want to do things very carefully. Ideally, you’re going to want to free up just enough space to get PeerTube’s utility scripts running. Those tools will help get you the rest of the way.
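A quick way to find reclaimable leftovers is to look for unusually large individual files, rather than whole directories. This sketch assumes the default storage path; large stragglers here are usually leftover source videos or temp files from failed jobs:

```shell
# List any individual files over 1 GiB under the storage tree.
find /var/www/peertube/storage -type f -size +1G -exec ls -lh {} \;
```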

Cleaning up the Database

While we’re at it, let’s take a look at how big our Postgres database is. Instead of switching users and messing around in psql yourself, here’s an easy command that you can run.

sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('peertube_prod'));"

In my case, the output ends up looking like this:

pg_size_pretty 
----------------
1055 MB
(1 row)

For a PeerTube instance that’s nearly four years old, this is very manageable. However, if you needed to clean up your database, you can run VACUUM in psql. Although more recent releases of Postgres can support running this operation while PeerTube is running, it’s probably best to stop the service and focus on the database.

sudo systemctl stop peertube.service
sudo -u postgres psql -d peertube_prod -c "VACUUM;"

In a nutshell, VACUUM reclaims some space from old database tables, indexes, and tuples that are no longer in use. The Postgres project offers some solid documentation on the different ways you can use this feature, from a casual reclaiming of space to intensive operations that lock the table during maintenance.
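If you want to see where the space is actually going before you vacuum, Postgres can report per-table sizes. This query uses the standard pg_statio_user_tables catalog view, so it should work on any recent Postgres version; only the peertube_prod database name is an assumption:

```shell
# Show the ten largest tables in the PeerTube database, indexes included.
sudo -u postgres psql -d peertube_prod -c \
  "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
     FROM pg_catalog.pg_statio_user_tables
    ORDER BY pg_total_relation_size(relid) DESC
    LIMIT 10;"
```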

Cleaning up Loose Files

Dealing with a bunch of extra video files is kind of a headache. On the one hand, they’re right there on your server for processing, and you can go through and attempt to fix your broken videos. On the other hand, there’s sometimes a lot of other random things that don’t belong in there, and they can take up space.

One approach to handling this is to simply copy the files and put them somewhere else, to be worked through at a later time. If you’re handy with rsync, you can copy the storage directory onto another computer. Here’s an example of pulling it down to another machine over SSH (note the -a flag, which recurses into the directory and preserves file attributes):

rsync -aP --rsh=ssh userid@remotehost.com:/var/www/peertube/storage ./peertube-storage

Now that we have a copy stowed away, we can run the prune-storage PeerTube script. It will ask you for a confirmation prior to running.

cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production npm run prune-storage

In the upcoming release of PeerTube 6.2.0, it will also be possible to purge remote files, and re-fetch them when requested.

cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production npm run house-keeping -- --delete-remote-files

Fixing Broken Videos

You’ve put your ear to the ground, listened to some strange rumblings on your server, and diagnosed a problem. Assuming you dealt with the main problem, you’re probably asking yourself: now what? Let’s go through the steps of fixing a broken PeerTube video.

Find the Affected Video

PeerTube’s admin UI is kind of limited. There are some helpful labels that tell you things, but filtering to get exactly what you want is harder. The Video Overviews page for admins can be found at https://yourpeertube.site/admin/videos/list.

When I’m trying to find videos that failed transcoding, I’ll usually start with this filter first:

isLive:false hls:false isLocal:true

This checks for local videos that aren’t live streams, which haven’t successfully transcoded the HLS format that PeerTube uses. Note that this assumes you’re using HLS as your preferred format, instead of Web Video.

Note that the labels in this view can occasionally be incorrect: a video may carry the wrong badge even though it actually failed transcoding.

For videos that fail to move to Object Storage, things are trickier: usually, transcoding was successful, but your integration wasn’t able to kick the files over. Your best bet is to use the same filter as before, but with hls:true instead. You’ll see yellow badges in your index for videos that failed to move over.

isLive:false hls:true isLocal:true

Get the Video UUID

Video IDs in PeerTube are a bit weird, in that a video actually has multiple IDs for different purposes. Let’s take this video as an example: https://spectra.video/w/upx8P1aadvRMaiHEu2Tnbg

The Object ID used for ActivityPub and page routing is: upx8P1aadvRMaiHEu2Tnbg. That’s fine and dandy, but we need the exact UUID for the video. Internally, the value for this specific video is: e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf.

The question is, how do you get that value?

One quick and easy way to do it is to visit the video, click the three dots, and select “Download”. The ID you need is in the video file’s name.

In the case of our example video, the URL for the file is https://spectra.video/download/streaming-playlists/hls/videos/e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf-1080-fragmented.mp4. Because of the way PeerTube organizes files, the video’s UUID will always be the five-part sequence of hexadecimal characters in the associated URL. So, the ID is e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf.
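Since UUIDs always follow the same 8-4-4-4-12 hex pattern, you can also pull the ID out of any of these URLs with a one-liner instead of eyeballing it:

```shell
# Extract the video UUID from a PeerTube file URL.
url="https://spectra.video/download/streaming-playlists/hls/videos/e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf-1080-fragmented.mp4"
echo "$url" | grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
# -> e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf
```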

Attempt File Recovery

Depending on your situation, you might not be able to accomplish this. Every entry in PeerTube’s video catalogue has file links associated with it, which you can see in the admin interface. However, those links might end up pointing to dead files.

However: if you know for a fact that the file lives in Object Storage somewhere, you can use the UUID from the last step to your advantage. Study the download URL to figure out what file path the video might live at.

Because this video was uploaded after the migration to new infrastructure, the file in question ended up living in a different directory in my bucket than expected: /hls/e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf was the final location.

Check the Job Queue

The job queue in PeerTube is your best friend. It lives in your admin dashboard at: https://yourpeertube.site/admin/system/jobs

You’ll notice that there are about 20 different job types to filter by. Typically, the ones you’ll want to check most often are the following:

  • Video Import
  • Video Transcoding
  • Move to File Storage
  • Move to Object Storage

How to Handle Stuck Jobs

Every now and then, you’ll get a job that fails to complete, such as a transcoding job. Usually, you can just start another job and try again…but sometimes, old jobs can linger and prevent you from moving forward. Maybe you’ve run multiple transcoding jobs, and the failed ones are preventing you from moving a video to Object Storage.

It’s handy to know that PeerTube uses a job queuing library called BullMQ to push jobs directly into Redis. We can manipulate Redis directly using redis-cli: run it in your terminal, and you’ll be taken to the Redis command shell.

Let’s take this failed transcoding job and remove it from the server. The Job ID shown in the Local Jobs UI is cea7def6-bb22-4030-a77a-4ecdcfdb09ea.

We can find the corresponding job in Redis using the KEYS command:

KEYS "bull-spectra.video:video-transcoding:cea7def6-bb22-4030-a77a-4ecdcfdb09ea"

Here’s the output, showing a matching result:

1) "bull-spectra.video:video-transcoding:cea7def6-bb22-4030-a77a-4ecdcfdb09ea"

Now, all we have to do is delete the job:

DEL "bull-spectra.video:video-transcoding:cea7def6-bb22-4030-a77a-4ecdcfdb09ea"

The job has now disappeared from PeerTube.
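If you’ve accumulated a pile of stale jobs, deleting them one at a time gets old. The sketch below removes every key matching the transcoding-job pattern at once, using redis-cli --scan (which is gentler on a busy server than KEYS). Be careful: this deletes matching job keys indiscriminately, so only run it when you’re sure no transcoding jobs should survive, and adjust the bull-spectra.video prefix to your own instance’s hostname:

```shell
# Bulk-delete all video-transcoding job keys. Replace "bull-spectra.video"
# with the prefix used by your instance, and be sure no live jobs remain.
redis-cli --scan --pattern 'bull-spectra.video:video-transcoding:*' \
  | xargs -r redis-cli DEL
```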

Manually Alter the Video State

In some rare cases, your video might be stuck in a state, due to a job failing to complete. PeerTube might complain that a video is already being transcoded, when that process actually failed.

You can reach into the Postgres database directly to reset the video’s state, like so:

sudo -u postgres psql -c "UPDATE video SET state = 1 WHERE uuid = 'effda647-9f22-4ea7-bb4f-74f6b53bd2e5';"

Upload Video to Filesystem

Assuming that you have a copy of the video somewhere, now you’ll need to get it onto your server. My recommendation is to use SFTP in a file manager, but you can just as easily use something like wget from within the server session to fetch it remotely.

I like to put these videos in the storage/tmp/ folder of my PeerTube server. It’s easy to remember, and the file path is relatively simple.

Trigger Manual Import

A really important thing for admins to know: PeerTube’s server scripts are your friend. These scripts interact with the server directly, and are capable of kicking off a number of jobs that can’t be done anywhere else.

We’ll run this job to import the video file, and associate it with our problem child video. The parameters for the script are relatively simple: create-import-video-file-job -- -v [your-video-uuid] -i /path/to/your/file.mp4

cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production npm run create-import-video-file-job -- -v e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf -i /var/www/peertube/storage/tmp/sean-sc-video.mp4

In some cases, you might have to manually switch to the peertube user account. In that case, run this instead:

su peertube
cd /var/www/peertube/peertube-latest
NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production npm run create-import-video-file-job -- -v e6092d36-1b3c-4b3f-b994-06c1ddd4f9cf -i /var/www/peertube/storage/tmp/sean-sc-video.mp4

The import happens pretty much immediately, and will replace previous file records for a video in the database. You can see this reflected in the Admin UI.

Trigger Video Transcoding

Once you’ve completed the import job, it’s time to manually run transcoding. You can do this in the admin UI, from the video overview.

This part usually takes the longest. A number of jobs will queue up, one for each video format and resolution option set up in your PeerTube config.

Move to Object Storage

We’re on the final stretch! Usually, this job will kick off on its own. If you have to do it manually, though, you can use the server script instead.

cd /var/www/peertube/peertube-latest
sudo -u peertube NODE_CONFIG_DIR=/var/www/peertube/config NODE_ENV=production npm run create-move-video-storage-job -- --to-object-storage -v [videoUUID]

For whatever reason, this job can be a little heavy on the server when dealing with larger files. Between page refreshes, you might see an Nginx error page as PeerTube fails and restarts. Don’t worry: just leave it alone and wait for the job to finish.

Assuming everything went off without a hitch, you’ll be able to see the video in a published state, with successful playback and peer-to-peer sharing functionality.


This concludes the steps I take as a PeerTube admin when dealing with system issues. It’s not pretty, but hopefully this will give other admins some ideas on how to triage the most common problems with videos.

Sean Tilley

Sean Tilley has been a part of the federated social web for over 15 years, starting with his experiences with Identi.ca back in 2008. Sean was involved with the Diaspora project as a Community Manager from 2011 to 2013, and helped the project move to a self-governed model. Since then, Sean has continued to study, discuss, and document the evolution of the space and the new platforms that have risen within it.
