Supabase is a great tech stack for product development. Yet it over-promises or under-delivers in areas I consider essential as a startup tech founder. For this reason, I cannot fully recommend Supabase unless you are prepared and willing to work around the hurdles you will eventually encounter.
Much has been said about the great things that Supabase provides and how they work, so I want to talk about the other side of the coin. In this article, I share my developer experience (DX) after using Supabase extensively for months (by now, I consider myself a power user of the stack). The 3 main goals of this Supabase review are:
- Lay out the facts thoroughly and go into the specifics as to why I have reached this conclusion (and give remedies to some of the issues). I hope it is an informative read despite its length.
- Give a raw and practical perspective to you, the developer, CTO, or tech enthusiast using or considering Supabase to build your product and make you aware of the shortcomings we have hit while building our own product so you can make an informed decision.
- Share candid feedback to the Supabase team about the current state of their product if it reaches them.
My intention is NOT:
- to **** on any tech company or their product
For those who do not know, Supabase is an open-source, PostgreSQL-based alternative to Firebase, which aims to provide developers with a suite of tools that simplify the process of building applications. The company was founded in 2020, has had over 110,000 developer signups, and has raised over $116M in funding. Much of what they’ve built and accomplished is superb, and they constantly improve the product.
On our side, at Hyperion, we are building a financial software platform for small businesses. As CTO of an early-stage startup, I aim to be as resourceful, fast, and cost-effective as possible in developing and delivering our products. Our small team was searching for a simple and clean tech stack that met these necessities, and our investigation led us to Supabase, since it was marketed as meeting our primary needs:
- quick to get started and start experimenting
- built upon core technologies that we consider mature and had experience with (PostgreSQL, TypeScript)
- holistic (provides standard solutions to typical SAAS product requirements)
- provided an online dashboard
- has extensive documentation and guides
- has an active community for support and help
Supabase seemed like everything we ever wanted to get started with our product. We are developing our product using Supabase daily, actively participating in the GitHub + Discord discussions, and contributing some code to the code base.
However, the reality was that the journey was not super smooth. During this period (and sometimes still today), we have hit all the issues I describe in this article. That is when I felt the need to share my experience and give a different viewpoint to those interested in getting a complete perspective on the stack. I hope these issues are transitory and get resolved so that the DX fulfills the promise of building faster and focusing on the product instead of struggling with the hindrances Supabase introduces.
Let’s get started.
Our development process consists of each developer working on a development branch and local dev environment capable of isolating their work to avoid interfering with each other. When approved, changes are merged into an environment branch and deployed to said environment. This is a common development process, even for small teams.
And so my first issue with the Supabase stack did not even come up at the technical level but at a philosophical level: the Supabase docs are written chiefly to suggest that the standard way to develop and test your product is directly in a remote environment where all code is deployed to, and resources are shared. When I mention remote environment, I refer to the Supabase project hosted by the Supabase platform. This may be fine for a solo developer or a hobby project. However, this does not provide the stability and isolation needed to prevent our developers from stepping on each other’s toes. Moreover, setting up individual remote projects for each developer adds more complexities and overhead to our development.
Supabase has local dev tools to address this (my next point), but it is crucial to understand the implicit Supabase philosophy of the primacy of remote environments over local environments as it leaks into other, more specific issues. In my opinion, most teams want to do their development locally, and this implicit “remote-first” philosophy can mislead developers into reaching false conclusions about the stability and feature parity between remote and local, which will bring headaches during development if you are unaware of them.
Expanding on the previous point, Supabase provides a way to run the containerized Supabase stack locally using the Supabase CLI (our next issue). It is essential to understand that Supabase, under the hood, is a collection of multiple open-source technologies, some heavily customized, that have been containerized and integrated into this stack package and exposed via their API and clients. That includes database, authentication, secrets storage, an HTTP API, connection pooling, etc.
The problem is that the local and remote environment stacks are not at full parity feature-wise. Furthermore, even when a feature is present in both environments, they are not guaranteed to work equally. I will go in-depth into some later, but these include and are not limited to:
- Dashboard studio options and UI
- Vault features
- Storage features
- Project configuration options
- Some auxiliary stack services
- Auth features like email templates, SMS, etc
This is a pretty extensive list (without being comprehensive) of the things we have found that do not work in full parity between local and remote (and we have yet to try all the features). This causes unexpected issues when moving product features and code from local to remote or when one tests things locally following the docs, and they do not work as expected (or not at all) since the docs are written with remote in mind.
We have been hit several times with investing time into using a Supabase feature that does not work locally in a consistent manner or works differently on remote, making us lose valuable time. This creates distrust of the technology, and our threshold of giving a new feature or update the benefit of the doubt diminishes. We now take the docs with a grain of salt, having to revalidate what the docs say when using features locally. Hence, since we do our development locally, we’ve decided that we will only use features on remote if they work locally. Otherwise, we refrain until we’ve heard good testimonials from the community via Discord/GitHub and checked it is stable enough to use locally and remotely.
This is a crucial issue to be aware of as it affects many parts of the Supabase stack. The Supabase CLI is the primary dev tool to set up project configurations, run the local stack, and perform other auxiliary processes like migration management and type generation. The CLI is ever-evolving, adding new features and fixes, which is excellent. However, the issue is not that they make frequent changes and updates but how these changes are released. Different CLI versions can break things during the local dev and deployment process. Since we use npx, we need to specify the version on the supabase command; otherwise, npx will always fetch the latest version.
```shell
# runs specific version (1.90.0)
npx supabase@1.90.0 <command>

# fetches and runs latest version
npx supabase <command>
```
This forces us to “lockdown” or “pin” our CLI version across all our development and deployment processes to ensure consistency in the version used to develop. Many unexpected things can impact your development if you do not do this.
For example, your type generation could suddenly stop working or generate missing types because something in the CLI changed that would break that feature. Or a change was made to the CLI local stack startup process that breaks running the local stack altogether. Or changes to how database migrations are applied that break the local stack. All of these have happened to us, and we are always on the lookout when we need to bump the CLI version to get a new fix or product feature. One has to consciously monitor the issues when upgrading versions. This adds more overhead to developers and keeps you on edge about possibly breaking your working app at any given point due to an unexpected CLI version change.
I’ve given feedback to the Supabase developers about this issue, and they have taken steps to mitigate this by releasing less frequent stable versions, but there aren’t systematic tests done between CLI versions that prevent future breaking changes, and the process of fixing bugs is more reactive than preemptive. That said, I want to mention that the CLI dev team is pretty responsive and does their best to revert and fix these issues, but I cannot deny it impacts our development.
Since we use TypeScript, we use the JS (TS) Supabase client provided by Supabase. The client is an “all-in-one” package that lets you use the many Supabase services with a single client in your front-end and back-end code. It is used for user authentication, querying data, and running database and edge functions. The Supabase client simplifies data querying by using the CLI to introspect (i.e., analyze) your database schema and auto-generate the TypeScript types. These types allow you to build type-safe queries the client uses to fetch your data via the PostgREST API (which the Supabase stack sets up for you). This means the querying syntax is based on PostgREST (the JS client leverages PostgREST heavily; more on this later).
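As an illustration, here is a trimmed sketch of what that generation step produces (the real output of `npx supabase gen types typescript --local` is far larger, and the `invoices` table here is a hypothetical example): the generated file exposes a `Database` interface you can index into for type-safe row types.

```typescript
// Trimmed sketch of the kind of file `supabase gen types typescript` emits.
// Table and column names are hypothetical; real output also includes
// Views, Functions, Enums, and relationship metadata.
interface Database {
  public: {
    Tables: {
      invoices: {
        Row: { id: string; customer_id: string; total: number };
        Insert: { id?: string; customer_id: string; total: number };
        Update: { id?: string; customer_id?: string; total?: number };
      };
    };
  };
}

// A small alias so application code stays in sync with the schema.
type Tables<T extends keyof Database["public"]["Tables"]> =
  Database["public"]["Tables"][T]["Row"];

const invoice: Tables<"invoices"> = {
  id: "inv_1",
  customer_id: "cus_1",
  total: 99.5,
};
```

When the schema and the generated file drift apart (the failure mode described below), every query built against these types silently loses its safety net.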
When it works, it works like a charm, and it is a beautiful experience: you make database schema changes, generate types, and voilà, your queries are automatically updated to reflect the changes. Yet, when type generation does not work, it breaks the whole concept since you lose type safety in your code. You end up fragmenting your client code between working and non-working queries, sprinkling it with manual types, and inserting @ts-ignore statements. This is hardly a great DX and introduces the potential for errors.
You may not encounter this issue if your use cases are elementary queries. But if you have multiple data tables and join related tables, the likelihood of this becoming a problem increases. The common type issues I have encountered center around querying relationships and nested queries. This, combined with the next point, is why we have moved away from this prominent feature and now use the client primarily for authentication and calling edge functions.
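For reference, the kind of query where we saw types degrade is the nested/relationship select. A sketch (the local URL and key are placeholders, and the table and column names are hypothetical):

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder local-stack URL and anon key; swap in your own project values.
const supabase = createClient("http://localhost:54321", "public-anon-key");

// `customers(name, email)` is PostgREST resource embedding: it pulls the
// related customer row in one request. Queries of this nested shape are
// where generated types most often broke for us.
async function openInvoices() {
  const { data, error } = await supabase
    .from("invoices")
    .select("id, total, customers(name, email)")
    .eq("status", "open");
  if (error) throw error;
  return data;
}
```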
Initially, we started using the JS client from our front end to fetch, insert, and update data directly. This is possible because the Supabase client provides an abstraction layer over PostgREST. PostgREST exposes an auto-generated API for clients to query your database data using HTTP requests. Unfortunately, this has a massive drawback for data-critical processes: the client does not support database transactions.
Transactions are not a niche, nice-to-have feature. They are the fundamental relational database mechanism used to make ACID operations possible. These allow developers to execute all-or-nothing logic to guarantee data integrity. The fact that the Supabase client does not support this prominent feature that other ORMs and query builders already provide relegates the client to very basic use cases. The feature has been requested multiple times (for example, here, here, here), and an official issue is petitioning for this feature. However, given the Supabase client is built upon PostgREST, I believe this would require a fundamental architecture change to provide transactions, so I do not think this will be available anytime soon.
The official, practical workaround to this limitation is implementing all the transactional logic directly in the database as remote procedure calls (RPCs), i.e., PostgreSQL functions, and calling those RPCs using the client. I can tell you this is not an equivalent alternative to the great DX provided by the TypeScript client, for various reasons:
- It fragments logic between domains (database, front end, backend) and languages (SQL, TypeScript)
- It prohibits code sharing and may even require implementing the same logic twice (for each different domain)
- Developing/debugging in SQL is extremely bad DX, especially for anyone coming from TypeScript and not used to SQL. You lose all the advantages of committing to Supabase to use the TypeScript goodies.
- Changing your logic would require migrations to ensure the changes are version-controlled and consistently deployed to your environments.
- One could argue business logic should live in the backend anyway (and I agree), but even if one were to build this logic in a backend API or edge functions, the Supabase client would be useless due to the lack of transactions.
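Concretely, the workaround looks like this on the TypeScript side. This is a sketch: `transfer_funds` is an imaginary PostgreSQL function you would define in a SQL migration, wrapping the multi-statement logic in a single database-side transaction.

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder local-stack URL and anon key.
const supabase = createClient("http://localhost:54321", "public-anon-key");

// The all-or-nothing logic lives in SQL; TypeScript only dispatches the call.
// The function name and argument names are hypothetical.
async function transferFunds(from: string, to: string, amount: number) {
  const { data, error } = await supabase.rpc("transfer_funds", {
    from_account: from,
    to_account: to,
    amount,
  });
  if (error) throw error;
  return data;
}
```

Note that every change to `transfer_funds` now requires a new migration, which is precisely the logic fragmentation described in the list above.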
For this reason, and the type inconsistencies mentioned earlier, we abandoned the Supabase client for data fetching and updating (one of its main attractive features). Instead, we use Kysely for DB interfacing, and it has been the experience I expected natively from the Supabase client (and then some). It is simpler than managing a Frankenstein codebase of DB RPCs and TypeScript functions (which we tried, until it got out of control and became unmanageable).
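For context, this is roughly what the transaction story looks like in Kysely. The schema, table names, and connection string are assumptions for a local setup, so treat this as a sketch rather than a drop-in snippet:

```typescript
import { Kysely, PostgresDialect } from "kysely";
import { Pool } from "pg";

// Hypothetical schema for illustration only.
interface DB {
  invoices: { id: string; customer_id: string; total: number };
  accounts: { id: string; balance: number };
}

const db = new Kysely<DB>({
  dialect: new PostgresDialect({
    // The local Supabase stack exposes Postgres on port 54322 by default
    // (an assumption; check your supabase/config.toml).
    pool: new Pool({
      connectionString:
        "postgres://postgres:postgres@localhost:54322/postgres",
    }),
  }),
});

// All-or-nothing: if any statement throws, the whole transaction rolls back.
async function createInvoiceAndCharge(customerId: string, total: number) {
  await db.transaction().execute(async (trx) => {
    await trx
      .insertInto("invoices")
      .values({ id: crypto.randomUUID(), customer_id: customerId, total })
      .execute();
    await trx
      .updateTable("accounts")
      .set((eb) => ({ balance: eb("balance", "-", total) }))
      .where("id", "=", customerId)
      .execute();
  });
}
```

Everything stays in TypeScript, fully typed, with no SQL/RPC split — which is exactly the DX the Supabase client promises elsewhere.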
- The docs give bare-bones, simple, and ideal examples of using Deno and Edge Functions and underplay the challenges of setting up these functions.
- There is no support for NPM packages. Instead, one imports dependencies using import maps and CDN URLs that pull them over the network. Import maps can get cumbersome and finicky when importing dependencies.
- Some NPM packages you use may not exist for Deno. In other cases, a dependency may exist but expose a different API from its Node version, requiring you to implement an interface that standardizes usage of these libraries if you share code.
- Supabase is extremely strict about the folder structure required to run functions locally. You could be chasing red herrings when troubleshooting your code because of a misplaced file, and the Supabase CLI/stack does not indicate the root cause of the problem.
- Setting up and running edge functions for local development is not trivial for multiple functions and sharing code between them, let alone with the rest of your codebase (next point).
- While not impossible, sharing dependencies between Deno files and the rest of your TypeScript codebase (e.g., a React app) is VERY challenging. It requires careful setup and orchestration between package names, import aliases, and file extensions to have a compilable structure.
- Locally run functions emit vague, non-descriptive errors when they fail due to compilation or container issues. One has to develop a systematic troubleshooting process to find the root cause of a failing edge function, consisting primarily of commenting out code and adding console logs (I have found no way to step-debug Deno functions).
- Deno is not mature, especially compared to Node. It brings new paradigms that can take a toll on developers who want to move fast and are already accustomed to Node’s particularities.
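To ground the points above, here is about the smallest viable edge function, following the layout the CLI expects. The std version pin is an assumption (check the Deno registry), and the function name is hypothetical:

```typescript
// supabase/functions/hello/index.ts — the CLI expects exactly this layout:
// one folder per function under supabase/functions/, with an index.ts entry.
// Dependencies come from URL imports (or an import map), not npm.
import { serve } from "https://deno.land/std@0.177.0/http/server.ts";

serve(async (req: Request): Promise<Response> => {
  const { name } = await req.json();
  return new Response(JSON.stringify({ message: `Hello, ${name}!` }), {
    headers: { "Content-Type": "application/json" },
  });
});
```

You would run this locally with `npx supabase functions serve hello` and invoke it with a POST request; code shared between functions conventionally lives in a `supabase/functions/_shared` folder. Getting from this toy example to many functions sharing code with a React app is where the real friction starts.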
Some of these issues are Deno-specific, but Supabase gives you no choice but to use Deno for Edge Functions, so it also becomes a problem for Supabase. Most of these issues would not have existed if Supabase had supported Node. I understand the rationale behind the decision, but it felt more like a decision driven by a technical bet rather than user demand. The safe choice would have been to start with Node support for Edge Functions and add Deno in parallel while the tech evolved, not to force Deno on developers.
We have been able to internally stabilize edge functions and make them work consistently for us, but only through a relentless effort to push through every barrier imaginable. The experience we went through will be too much to bear for others. This, again, does not deliver on the promise that Supabase is an easy-to-use solution to get started fast.
The Supabase Vault is a solution to store encrypted data isolated from the primary database schema used by Supabase, and one can encrypt/decrypt values with their helper functions. We were excited about this particular feature but found a set of issues as soon as we used it locally.
One is that the Vault feature is not accessible from the local dashboard like it is in the remote one (an example of the disparity between environments). Another issue we faced locally is that you can create secrets but cannot use them afterward, getting a cryptic error instead. This alone was enough to make us abandon the feature and implement our own solution. Unless the Vault gains some functionality not worth the effort to build ourselves, we are unlikely to return to it.
This feature lets you call HTTP endpoints or Edge Functions when data changes in your database tables (inserts, updates, deletes). This is done by setting up PostgreSQL triggers on the tables you want to react to. These triggers make an HTTP request from your database to the endpoint, sending the changed table data.
The issue we have found here is that when you set up these triggers, the endpoint URLs get hardcoded into the trigger parameters, making them complicated to set up and share across our development and remote environments. It is another instance where Supabase’s “remote-first” approach hinders local development and deployment to multiple environments. Once more, we had to implement our own webhook-trigger solution that allows us to set up these events dynamically.
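Our workaround boils down to generating the trigger DDL per environment instead of hardcoding the URL. A minimal sketch (the `supabase_functions.http_request` helper is what Supabase’s own webhooks use, but its exact argument list may differ between versions, so verify against your stack; table names and URLs here are examples):

```typescript
// Sketch: build environment-specific webhook trigger SQL so the endpoint
// URL comes from configuration rather than being baked into a migration.
function webhookTriggerSql(table: string, endpointUrl: string): string {
  return `
create or replace trigger "${table}_webhook"
after insert or update or delete on public."${table}"
for each row
execute function supabase_functions.http_request(
  '${endpointUrl}', 'POST', '{"Content-Type":"application/json"}', '{}', '1000'
);`.trim();
}

// Each environment supplies its own endpoint (example values below).
const localSql = webhookTriggerSql(
  "invoices",
  "http://host.docker.internal:54321/functions/v1/on-invoice-change"
);
```

The same helper, fed a different URL from your environment config, produces the trigger for staging or production, keeping one source of truth instead of diverging hardcoded migrations.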
The beauty of open source is that you can actually leverage the community to help solve problems. While some of their repositories (like the CLI) are very active and responsive, other Supabase modules get less attention. For such a well-funded company, I would expect them to invest the resources to work with the community to solve problems quickly, especially for their more robust and established feature offerings.
And here lies my final issue: I have noticed the trend of Supabase focusing more on releasing trendy features and marketing them rather than stabilizing their product and addressing common issues. It often feels like they want to show progress and attract users by launching new products before fixing the problems more advanced users struggle with from their current services. The reality is that when some of these new features launch, they work for specific and simple use cases, and once released, some feel almost abandoned. For example, I have made a very simple PR to contribute a fix for the Storage feature, but it has remained open and inactive for 2 months.
It is hard to commit to using a new feature, especially when it has not fully materialized and no progress is seen on community issues. Sometimes, the boring work pays off more than building the shiny new toy people may not have even asked for.
There you have it. These are the main issues we have had and are enough to not fully recommend Supabase as a holistic solution yet. We have yet to venture into other more advanced features such as Vectors, Wrappers, Self-Hosting, Realtime, etc., but I expect some of the same themes to come up if we do. Other more manageable issues I didn’t go into include:
- Local/remote database migration management
- Bare-bones secrets management
- Handling authenticated sessions in your React App (client side and SSR)
- If you’re not a JS/TS user, support for any other language is pretty much non-existent
We still use Supabase because we have already set up our project here, with the parts that do work doing so well and conveniently! However, most of said parts could’ve also been set up using technology like AWS hosting, RDS, Node Lambdas, and the same open-source tools Supabase uses to build their platform, especially since we are substituting and customizing parts of what Supabase provides.
I have been as specific and detailed as possible since the devil is in the details when you rely entirely on a stack to build your product and interact with the technology daily. I hope this helps the platform grow for the better and helps anyone considering Supabase make the best decision for them.
I eagerly await the day these issues are a thing of the past because I have seen that A-ha! moment when Supabase gets it right. I am sure it is only a matter of time, and if you decide to build your great product with Supabase, this article arms you with the knowledge and realistic expectations of what that will entail. Happy coding!