Posted on October 14, 2021 | Guille

The Client-Server architecture that has become ubiquitous in modern applications has had some unintended consequences that go against much of the original philosophy of an open Internet and free software. Large tech companies have taken control of users' data and locked them into their ecosystems, converting the Internet into an oligopoly of centralized services. One small step towards fixing this would be to separate data storage and limit server computation to the bare minimum.

What Is Client-Server?

Client-server is a model for structuring an application in two parts that share the workload: one centralized server that provides a resource or service, and one or more clients that request said resource or service. For example, a web browser is a client that requests resources (HTML, JS, and CSS) from web servers. In this case, the server is in charge of doing any necessary processing to generate the resources (for example, loading data from a database), while the client simply presents the content to the user. A mobile app is another example of a client, which usually connects to an API server to receive and process data.
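As a minimal sketch of that split (the port and markup here are arbitrary, and this is not tied to any particular framework), the server does the processing and the client merely fetches and presents the result:

```typescript
// Minimal client-server sketch (Node.js 18+): the server generates the resource,
// the client only requests the finished resource and displays it.
import { createServer } from "node:http";

const server = createServer((_req, res) => {
  // Server-side "processing": in a real application this is where data
  // would be loaded from a database and turned into HTML.
  const html = `<h1>Generated on the server at ${new Date().toISOString()}</h1>`;
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(html);
});

server.listen(8080, async () => {
  // The "client": no business logic, just fetch and show the result.
  const page = await (await fetch("http://localhost:8080/")).text();
  console.log(page);
  server.close();
});
```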

What’s Wrong With This Model?

The client-server model has been in use since the 1960s, and has grown to dominate the application architecture field hand in hand with the growth of the Internet and the WWW. There are several problems with this model that are becoming ever more apparent now that it has become the de facto standard:

  1. Opaqueness: open-source software provides the best guarantee that an application does exactly what it says it does and nothing more. It is also the best way of auditing security and data privacy, since it allows both black-box and white-box testing of the application. In a client-server application, even if both parts of the system are open-source, a user can only be certain that the running code is in fact the code that was inspected if the server is self-hosted. Server code is usually hosted by the service provider, so there is no way of knowing whether it is indeed the code that is available for inspection (e.g., Signal).

  2. Incompatibility: open standards are used to guarantee interoperability between different servers and clients: HTTP for the web, IMAP for email, etc. Even in these cases, implementations have slight variations that hinder interoperability. Sometimes it is just a different interpretation of the standard; other times one player wants to advance beyond the capabilities of the protocol (for example, QUIC for the web and JMAP for email); and in some cases it is a mechanism for vendor lock-in. When both the client and the server are controlled by the same provider there is no need for open standards, so there is no way of switching out either piece of the application or building a compatible substitute (short of reverse engineering).

  3. Data-Function Tie-in: servers usually provide both data management (creating, reading, updating, and deleting) and data processing. Because these two functions (storage and compute) are combined, there is no way to distribute them across different entities. This means that vendor lock-in for the service automatically implies lock-in for the data.

  4. Data Mismanagement: since all user data is managed behind closed doors, the user doesn’t know whether it is safe, what it is being used for, or even its extent and how long it is retained. There is no way of knowing if there has been a breach that compromised the data, whether it has been shared with (or sold to) a third party, or how detailed and how old the stored information might be.

What Is Client-Storage(-Server)?

Client-Storage(-Server) is a proposal for structuring applications going forward, where the storage component is detached from the server and as much computation as possible is moved to the client (in many cases removing the need for a server altogether).

Why Move to Client-Storage(-Server)?

While originally clients were “thin” (low computation) and offloaded all business logic and processing to the server, nowadays clients execute much more code thanks to the increase in processing power of PCs and smartphones. It used to be that a web server had to run a lot of code to generate the output HTML, but today most web applications are distributed as static files with large client-side JS codebases that contact an API server to retrieve and process the necessary data. In many cases the processing is also done in the front end, making it a Client-Storage application hidden behind a Client-Server architecture. If instead of accessing the data through an API it were accessed through a standard data access protocol such as WebDAV (see the sketch after the list below), there would be several benefits:

  1. Transparency: by having the entire codebase of the application in the client, an open-source application has no parts that could contain unaudited code. A user can always know exactly what code is being executed end to end. If this is combined with encrypting the data on the client before it is stored on the server (sketched after the list below), the storage provider cannot read or misuse it.

  2. Interoperability: only one simple CRUD protocol is necessary to read and write data on the storage server. Even if a cloud storage provider uses its own protocol, it is straightforward to add a client-side translation layer for it (see the adapter sketch after the list below).

  3. Separation of Concerns: data storage is isolated from processing, so that each area can be controlled individually. If there is a need for server-side processing, it is a separate system that receives the necessary data from the client (previously retrieved from storage). This should foster more competition, at least in the storage area, and possibly in all three areas (storage, clients, and servers).

  4. Data Privacy & Security: storing many users' data on one server is a great incentive for hackers to attack the system, and when that happens everyone’s data is compromised at once. By having each user store their data wherever they want, it is much harder to carry out a massive breach. Also, by sending only the bare minimum of necessary data to the server and not storing it there, users maintain control over where their data lives and how it is used, and service providers reduce their exposure under the GDPR and other data protection laws.
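As a rough sketch of the idea (the interface, class names, URL, and credentials below are hypothetical, not part of WebDAV or any product), a client can read and write a WebDAV store with plain HTTP verbs, and hiding that behind a small storage interface is all the translation layer another storage protocol would need:

```typescript
// Hypothetical client-side storage interface: any backend (WebDAV, S3, Dropbox, ...)
// only needs a small adapter implementing these three operations.
interface RemoteStorage {
  read(path: string): Promise<string>;
  write(path: string, body: string): Promise<void>;
  remove(path: string): Promise<void>;
}

// WebDAV adapter: create, read, update and delete map directly onto HTTP verbs.
class WebDavStorage implements RemoteStorage {
  constructor(private baseUrl: string, private user: string, private password: string) {}

  private headers(): HeadersInit {
    return { Authorization: "Basic " + btoa(`${this.user}:${this.password}`) };
  }

  async read(path: string): Promise<string> {
    const res = await fetch(this.baseUrl + path, { headers: this.headers() });
    if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
    return res.text();
  }

  async write(path: string, body: string): Promise<void> {
    await fetch(this.baseUrl + path, { method: "PUT", headers: this.headers(), body });
  }

  async remove(path: string): Promise<void> {
    await fetch(this.baseUrl + path, { method: "DELETE", headers: this.headers() });
  }
}

// Example: a note-taking client persisting a note with no application server involved.
const storage: RemoteStorage = new WebDavStorage("https://dav.example.com/notes/", "user", "secret");
await storage.write("shopping-list.txt", "milk\neggs");
console.log(await storage.read("shopping-list.txt"));
```

Swapping WebDAV for another provider only means writing another class that implements the same interface; the rest of the application does not change.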
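The transparency point above assumes the data is encrypted on the client before it is uploaded. A minimal sketch using the Web Crypto API (AES-GCM; key generation and management are deliberately left out) might look like this:

```typescript
// Sketch of client-side encryption before upload: the storage server only ever
// sees ciphertext. Key derivation and key storage are omitted for brevity.
async function encryptForStorage(plaintext: string, key: CryptoKey): Promise<Uint8Array> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per write
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  // Prepend the nonce so the client can decrypt what it reads back later.
  const blob = new Uint8Array(iv.length + ciphertext.byteLength);
  blob.set(iv);
  blob.set(new Uint8Array(ciphertext), iv.length);
  return blob;
}

async function decryptFromStorage(blob: Uint8Array, key: CryptoKey): Promise<string> {
  const iv = blob.slice(0, 12);
  const ciphertext = blob.slice(12);
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return new TextDecoder().decode(plaintext);
}
```

With this in place the storage provider holds only ciphertext and a per-write nonce, so a curious or compromised server learns little beyond file names, sizes, and access patterns.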

Conclusion

There are many examples of applications that require some sort of centralized server to coordinate actions, but there are countless others where this is unnecessary. For example, a note-taking app doesn’t require any server-side processing, only the storage of the notes. That is why Ideotec created WDNotes, a note-taking app that stores notes on a WebDAV server and doesn’t require any service provider. Another example is a location-tracking app. Google Maps Timeline stores all the locations you’ve visited so that you can look back and see where you were on a particular day or the places you’ve visited in the past year. It’s a useful product, but it unnecessarily shares very private personal information with Google (known for mismanaging user data for profit). Ideotec is developing WDLocation in the hope of offering a similar product without the privacy nightmare, by storing all location data on a WebDAV server. Other examples of applications that only require storage and client-side processing are contacts and calendar apps, which even have their own standard extensions of WebDAV (CardDAV and CalDAV, respectively) and many implementations available.
