# Audiobookshelf crashes after a quick match and won't stay up
Fix the SequelizeUniqueConstraintError on bookSeries that crashes Audiobookshelf into a Docker restart loop after a quick match
If a quick match (manual, or triggered by a third-party tool against the API) tries to add a series link that already exists in your database, Audiobookshelf throws an unhandled SequelizeUniqueConstraintError and the entire Node process dies. If you’re running in Docker with `restart: unless-stopped` (or the third-party tool keeps calling the API on reconnect), the same error fires again right after startup and you get stuck in a crash loop: the server is unreachable until you manually delete the offending row from SQLite.
The crash log looks like this:

```text
FATAL: [Server] Unhandled rejection: Error
    at async BookSeries.save (/app/node_modules/sequelize/lib/model.js:2490:35)
    at async bookSeries.create (/app/node_modules/sequelize/lib/model.js:1362:12)
    at async Scanner.quickMatchBookBuildUpdatePayload (/app/server/scanner/Scanner.js:318:30)
    at async Scanner.quickMatchLibraryItem (/app/server/scanner/Scanner.js:76:35) {
  name: 'SequelizeUniqueConstraintError',
  errors: [
    ValidationErrorItem {
      message: 'bookId must be unique',
      ...
    },
    ValidationErrorItem {
      message: 'seriesId must be unique',
      ...
    }
  ],
  parent: [Error: SQLITE_CONSTRAINT: UNIQUE constraint failed: bookSeries.bookId, bookSeries.seriesId]
```
The clue is the last line: UNIQUE constraint failed: bookSeries.bookId, bookSeries.seriesId. The bookSeries table has a unique constraint on the pair (bookId, seriesId), and quick match is trying to insert a pair that’s already there. It’s reported in #5215 and #4775.
## Why it crashes the whole server
Look at the scanner code at server/scanner/Scanner.js:298-322. When a quick match wants to attach a series to a book, it checks the book’s in-memory series list:
```js
const existingSeries = libraryItem.media.series.find(
  (s) => s.name.toLowerCase() === seriesMatchItem.series.toLowerCase()
)
if (existingSeries) {
  // update sequence only
} else {
  // ...create the series if missing, then:
  const bookSeries = await Database.bookSeriesModel.create({
    seriesId: seriesItem.id,
    bookId: libraryItem.media.id,
    sequence: seriesMatchItem.sequence
  })
}
```
If the in-memory libraryItem.media.series is stale (concurrent request, an API client that holds an older view, or a partially loaded item), the check passes and ABS tries to INSERT a bookSeries row that already exists. The database rejects it. Since the create call isn’t wrapped in a try/catch, the rejection bubbles up as an unhandled promise rejection and Node exits.
If Docker restarts the container and whatever called the quick match (a script, a third-party tool reconnecting on a socket) calls it again, the in-memory state still lacks the row, so the same INSERT runs. Crash again. Loop.
A common suspect is a third-party API client like ReadMeABook that fires quick match calls automatically. The maintainer pointed out in the issue thread that the stack trace always shows quickMatchBookBuildUpdatePayload, so the call path is quick match even when the user did not run one from the UI. The exact source of the API call (a script, a tab someone forgot they had open, a tool reconnecting) is not yet resolved upstream.
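Until there is an upstream fix, the failure mode is easy to reproduce outside Audiobookshelf. The sketch below uses a throwaway SQLite database with a simplified two-column unique constraint (an illustration, not the real ABS schema) to show both the error ABS hits and the idempotent `INSERT OR IGNORE` pattern that would sidestep it:

```shell
# Toy reproduction of the bookSeries UNIQUE constraint failure.
# Simplified schema for illustration; not the real Audiobookshelf schema.
db=$(mktemp)

sqlite3 "$db" "CREATE TABLE bookSeries (
  bookId TEXT,
  seriesId TEXT,
  sequence TEXT,
  UNIQUE(bookId, seriesId)
);"

# First insert succeeds.
sqlite3 "$db" "INSERT INTO bookSeries VALUES ('book-1', 'series-1', '1');"

# A second identical insert fails -- this is the error that kills ABS.
sqlite3 "$db" "INSERT INTO bookSeries VALUES ('book-1', 'series-1', '1');" \
  || echo "UNIQUE constraint failed, as expected"

# INSERT OR IGNORE is idempotent: it silently skips the duplicate.
sqlite3 "$db" "INSERT OR IGNORE INTO bookSeries VALUES ('book-1', 'series-1', '1');"

sqlite3 "$db" "SELECT COUNT(*) FROM bookSeries;"   # still 1 row

rm "$db"
```

In Sequelize terms, the equivalent defensive change would be something like `findOrCreate` or `upsert` instead of a bare `create`, but that decision belongs upstream; the point here is just that the insert is not idempotent today.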
## Fix it: delete the duplicate row from SQLite
Stop the server first, otherwise it’ll keep crashing while you work.

```shell
# Docker
docker stop audiobookshelf

# systemd
sudo systemctl stop audiobookshelf
```
Find the bookId and seriesId values in your crash log. They’re in the ValidationErrorItem blocks under value:. In the example above the book is c894f652-5983-4825-a23c-12e8ef4fc52a and the series is 4be27fd5-ae48-4e39-8085-aa2f8cba2890.
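If you’d rather not eyeball the log, a quick grep pulls every UUID out of it. This is a sketch that assumes your log is saved to a file and the IDs are standard UUIDs as in the trace above; the sample log content here is made up from the example values:

```shell
# Sample crash-log excerpt standing in for your real log file.
cat > crash.log <<'EOF'
ValidationErrorItem { message: 'bookId must be unique',
  value: 'c894f652-5983-4825-a23c-12e8ef4fc52a' }
EOF

# Extract all UUIDs; match them against the ValidationErrorItem
# blocks to tell bookId from seriesId.
grep -Eoi '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}' crash.log | sort -u

rm crash.log
```

With Docker you can skip the file and pipe `docker logs audiobookshelf 2>&1` straight into the same grep.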
Open the SQLite database. The path is /config/absdatabase.sqlite inside the container, which on the host is wherever you mounted the config volume.
```shell
sqlite3 /path/to/your/config/absdatabase.sqlite
```
Check that the duplicate is there:

```sql
SELECT * FROM bookSeries
WHERE bookId = 'PASTE_BOOK_ID_HERE'
  AND seriesId = 'PASTE_SERIES_ID_HERE';
```
You should see one row. That’s the row quick match is trying to insert a second time. Delete it:

```sql
DELETE FROM bookSeries
WHERE bookId = 'PASTE_BOOK_ID_HERE'
  AND seriesId = 'PASTE_SERIES_ID_HERE';
```
Exit sqlite (`.exit`) and start the server back up:

```shell
docker start audiobookshelf
```
The server should come up cleanly. If a third-party tool fires the same quick match again, the row will be re-created (this time legitimately, because the in-memory state will now be consistent with the DB).
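If you end up doing this more than once, the delete-with-backup part of the procedure can be scripted. This is a sketch under assumptions: `sqlite3` is on your PATH, the server is stopped before you run it (e.g. `docker stop audiobookshelf`) and restarted afterwards, and `delete_book_series_row` is a helper name I’m inventing here:

```shell
#!/bin/sh
# Sketch: remove one duplicate bookSeries row, taking a backup first.
# Stop Audiobookshelf before running this and start it again afterwards.
delete_book_series_row() {
  db="$1"; book_id="$2"; series_id="$3"
  # Backup before touching anything.
  cp "$db" "$db.bak.$(date +%Y%m%d%H%M%S)"
  sqlite3 "$db" "DELETE FROM bookSeries
                 WHERE bookId = '$book_id' AND seriesId = '$series_id';"
}

# Usage (values from your crash log):
# delete_book_series_row /path/to/config/absdatabase.sqlite BOOK_ID SERIES_ID
```

The backup copy means a mistyped ID costs you a `cp` back, not your library.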
## Stop the crash loop from happening again
The database fix gets you back online, but it doesn’t stop the original race condition. A few things help:
- **Take a database backup first.** Before you do anything else, copy `absdatabase.sqlite` somewhere safe. SQL changes against a live audiobook library can ruin your day if a typo creeps in. Audiobookshelf has a built-in backup feature under Settings > Backups; enable it if you haven’t.
- **Pause the third-party tool until your server is stable.** If you’re running ReadMeABook, an Audible importer, or any other script that hits the ABS API, stop it before you restart. Re-enable it only after a clean boot. If the crash returns the moment that tool reconnects, you’ve confirmed the trigger.
- **Avoid running quick match twice on the same book in quick succession.** From the web UI, quick matching the same book within a few seconds (especially with the same metadata provider) can hit the same race. Wait for the match to complete and the page to refresh before re-triggering.
- **Don’t store the SQLite database on a network share.** The maintainer asks about this in almost every crash thread because SQLite over NFS/SMB is a known source of constraint and corruption errors. Keep `/config` on local storage (SSD or NVMe). The crash here happens on local storage too, but networked storage will give you more variants of the same class of error.
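One quick way to verify that last point: check the filesystem type backing the config directory. `ABS_CONFIG` below is a placeholder of mine, not a real Audiobookshelf variable; point it at your actual config path.

```shell
# Show what filesystem the config directory lives on.
# ABS_CONFIG is a placeholder; set it to your real config directory.
# nfs, nfs4, cifs, or smbfs in the Type column means a network share.
df -T "${ABS_CONFIG:-.}"
```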
## If it didn’t work
**The crash returns immediately.** Either you deleted a row from the wrong table, or your tool fired another quick match between your DELETE and the restart. Stop the third-party tool first, then redo the DELETE, then start the server.

**Multiple bookSeries rows for the same book are showing up.** Run a one-off cleanup to find any pair that’s somehow not unique:
```sql
SELECT bookId, seriesId, COUNT(*) AS n
FROM bookSeries
GROUP BY bookId, seriesId
HAVING n > 1;
```
If anything comes back, those are extra rows that bypassed the constraint somehow (most likely from an older migration). Delete the duplicates, keeping the oldest by createdAt.
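A hedged sketch of that cleanup, demonstrated on a throwaway database: SQLite’s `rowid` roughly tracks insertion order, so keeping `MIN(rowid)` per pair approximates "keep the oldest". On your real database, back up first, and if the `createdAt` values look trustworthy, order on those instead.

```shell
# Toy demo: keep one row per (bookId, seriesId) pair, drop the rest.
# rowid order approximates insertion order; on a real DB, back up first.
db=$(mktemp)
sqlite3 "$db" "CREATE TABLE bookSeries (bookId TEXT, seriesId TEXT);"  # no UNIQUE, like rows that bypassed the constraint
sqlite3 "$db" "INSERT INTO bookSeries VALUES ('b1','s1'), ('b1','s1'), ('b2','s2');"

sqlite3 "$db" "DELETE FROM bookSeries
               WHERE rowid NOT IN (
                 SELECT MIN(rowid) FROM bookSeries GROUP BY bookId, seriesId
               );"

sqlite3 "$db" "SELECT COUNT(*) FROM bookSeries;"   # 2 -- one per pair

rm "$db"
```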
**You don’t have access to the SQLite shell on the host.** Use the container instead:

```shell
docker exec -it audiobookshelf sqlite3 /config/absdatabase.sqlite
```

If sqlite3 isn’t installed in the container, install it with `apk add sqlite` (the image is Alpine-based), or copy the database file out to your host and edit it there with DB Browser for SQLite.
## Affected versions
Reported on v2.34.0 in #5215. The same crash path was reported earlier on v2.30.0 in #4775. One commenter on #5215 says downgrading to v2.33.2 avoided the crash for them, so the trigger may be tied to changes in v2.34.0, but that’s a single data point. There’s no upstream fix yet. If you can reproduce it deterministically without any external API client touching the server, posting clean repro steps on #5215 helps.
Happy listening, Hemant