Resider Architecture for Bob Laikin And Co.

Security and Architecture Overview

Prepared for Bob Laikin and co.
Tech investments
A major tech investment we just completed was migrating our codebase to Next.js. Previously, we implemented advanced functionality manually, which meant more complicated code and a heavier maintenance burden.
Now we delegate much of that advanced functionality to Next.js. The result is code that is far simpler to maintain, more performant, and more secure than our previous manual implementations.
Security
Resider is built with Next.js.
Next.js is open source and rigorously tested for security vulnerabilities. See their published security practices here:
Beyond core security, Next.js intentionally makes it difficult for developers to implement bad practices. For example, its image optimization component does not accept SVG images by default. If a developer wants to override this default and optimize SVGs (in some cases it can be safe), they must explicitly enable a setting pointedly named “dangerouslyAllowSVG”. In this way, the framework ships with secure-by-default settings that cannot be accidentally disabled.
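For illustration, here is roughly what that explicit opt-in looks like in a Next.js config file. This is a hedged sketch – Resider leaves the secure default in place, and the TypeScript config format assumes a recent Next.js version:

```ts
// next.config.ts – illustrative only; Resider keeps the secure default.
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  images: {
    // Off by default. A developer has to deliberately type "dangerously"
    // for the image optimizer to accept SVGs at all.
    dangerouslyAllowSVG: true,
    // The Next.js docs recommend pairing the override with a strict CSP.
    contentSecurityPolicy: "default-src 'self'; script-src 'none'; sandbox;",
  },
};

export default nextConfig;
```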
High-level code repositories
Resider maintains 2 high-level repositories:
Resider-Parser: a headless data application used to ingest data from property management software providers
Resider-PWA: a performant web application (the one you see when visiting )
Resider-Parser
Built primarily with GitLab CI/CD.
GitLab is also an industry leader in security. Like Next.js, they hold the highest security certifications. Additionally, they extend that security expertise to users by automatically scanning our code for known vulnerabilities.
The way we use GitLab CI/CD is novel.
Typically, CI/CD workflows are used when pushing new features to production: GitLab compiles the code in an isolated container, runs tests, and performs other automated checks – all driven by .yaml config files. It is essentially a way to orchestrate hardware in the cloud. We use this same automated process to ingest data from property management software providers.
It works like this: if a property enables the 'Resider' syndication checkbox in their property management software (e.g. Yardi, RealPage, or Entrata), then the property management software provider uploads a .xml file containing property information such as pricing and availability to a file server daily. Only a Resider machine with a specific IP address is allowed to connect (specifically, we use a Google Cloud Compute Engine VM with a fixed IP address). So every day, our GitLab CI/CD process initializes a virtual machine, instantiates an SFTP client to securely connect to the property file servers, and downloads the .xml files for each property. Currently, the program downloads hundreds of files from each provider daily. Once downloaded, the data is parsed (to extract only the relevant nodes) and saved to our database only when there are relevant changes.
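A simplified sketch of that daily download-and-parse step is below. The host, paths, and node names are placeholders, and the ssh2-sftp-client / fast-xml-parser packages are illustrative assumptions rather than necessarily the exact libraries used in Resider-Parser:

```ts
// ingest.ts – simplified sketch of the daily ingestion job (illustrative libraries and paths).
import Client from 'ssh2-sftp-client';
import { XMLParser } from 'fast-xml-parser';

async function ingestDailyFeeds(): Promise<void> {
  const sftp = new Client();
  const parser = new XMLParser();

  // Connect from the fixed-IP VM that the provider has allow-listed.
  await sftp.connect({
    host: process.env.FEED_HOST!,      // placeholder for the provider's file server
    username: process.env.FEED_USER!,
    password: process.env.FEED_PASSWORD!,
  });

  try {
    // List the day's .xml feed files and download each one.
    const entries = await sftp.list('/outgoing');
    for (const entry of entries.filter((e) => e.name.endsWith('.xml'))) {
      const xml = (await sftp.get(`/outgoing/${entry.name}`)) as Buffer;

      // Parse the feed and keep only the relevant nodes (node names are hypothetical).
      const feed = parser.parse(xml.toString('utf8'));
      const pricing = feed?.Property?.Pricing;
      const availability = feed?.Property?.Availability;

      // Persist only when something relevant actually changed.
      await saveIfChanged(entry.name, { pricing, availability });
    }
  } finally {
    await sftp.end();
  }
}

// Placeholder for the change-detection + database write step.
async function saveIfChanged(file: string, data: unknown): Promise<void> {
  /* diff against stored values, write to MySQL if different */
}
```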
Resider-PWA
Next.js is a full-stack framework. In addition to hosting our frontend website files (the ones downloaded when navigating to ), it also provisions backend infrastructure. Under the hood, they have major contracts with Amazon Web Services and make that infrastructure delightfully simple to use.
We are using 3 primary cloud products:
Hosting on the Vercel content delivery edge network (Vercel is the company behind Next.js)
“Serverless” cloud functions
MySQL database server
1. Hosting static files on Vercel’s edge network
The domain is simply a friendly URL that points to a public server on Vercel’s edge network. When this server receives a request, it returns the initial HTML, CSS, and JavaScript files that anyone sees when navigating to that URL.
When the downloaded JavaScript file is parsed and executed, that code makes further requests to our cloud servers at different URLs. Each URL points to a different file on one of our cloud servers, and each file has slightly different settings. For example, the server at this URL does not check for authorization, because its job is to return publicly accessible data. If you click it, you’ll see a JSON blob containing structured property data that is shown to users in the form of beautiful cards and map markers. However, if you are not signed into a Resider account, this other API route will return nothing, because its job is to return properties that you or a member of your search group has saved.
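For a sense of what one of those gated routes looks like in Next.js, here is a hedged sketch. The route path, cookie name, and helper functions are placeholders, not Resider's real implementation:

```ts
// app/api/saved-properties/route.ts – illustrative sketch of an auth-gated API route.
import { type NextRequest, NextResponse } from 'next/server';

export async function GET(req: NextRequest) {
  // The public property route skips this check entirely and returns data to anyone.
  const user = await getSessionUser(req);
  if (!user) {
    // Not signed in to a Resider account: return nothing.
    return NextResponse.json([], { status: 401 });
  }

  // Properties saved by you or a member of your search group.
  const saved = await fetchSavedProperties(user.id);
  return NextResponse.json(saved);
}

// --- placeholder helpers standing in for the real session and data layer ---
async function getSessionUser(req: NextRequest): Promise<{ id: string } | null> {
  const token = req.cookies.get('session')?.value; // hypothetical cookie name
  return token ? { id: 'user-from-token' } : null;
}

async function fetchSavedProperties(userId: string) {
  return [{ id: 42, savedBy: userId, name: 'Example Lofts' }];
}
```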
Each API route linked above is implemented as a serverless “cloud function”. Take note of the word “edge” appended to each of those server URLs – it’s how we achieve millisecond database fetching, explained more below.
2. Serverless cloud functions
“Serverless” is a misnomer. There is still a server involved (i.e. a computer in a rack in a Google- or Amazon-managed data center); it’s just not managed by us (we don’t have to upgrade its operating system or install security patches – Google or Amazon does that for us).
This serverless approach has two primary benefits:
Rather than paying to keep an over- or under-powered server running 24/7 (even if we have minimal traffic at 3am), serverless functions operate ephemerally. Any time a user makes a request, Google or Amazon initializes a new instance on one of their computers to execute our code. So we only pay for the time the computer spends executing code (we don't even pay for the time it spends waiting on a response from our database!)
Of equal importance, cloud functions are (nearly) infinitely scalable. We do not have to constantly monitor supply and demand to ensure our servers are powerful enough. If we get a major spike in traffic, Google/Amazon simply initializes more instances on their servers. They are extremely efficient at this process.
However, one tradeoff of traditional serverless is known as “cold starts” (spoiler: they are solved via “edge servers”).
We use two types of cloud functions, described below:
Traditional cloud servers: powerful Node.js runtime, but slower to boot (~1.5 seconds).
These types of servers exist in only a handful of Amazon or Google data centers around the country. This means it takes longer for a user’s request to physically travel through wires to that warehouse and back. But these machines are the most powerful and can execute heavy tasks. Besides larger hardware, their increased processing ability is due to a powerful code runtime called Node.js. The secondary problem is that Node.js and all of its dependencies take time to load (around 1.5 seconds). So a slow boot time paired with a long round trip equates to latency for the user – around 3 seconds for a network request.
As a side note, there are ways to improve the “perceived” network response time, such as showing animated loading “skeletons”, low-quality blur image placeholders, or a local network-cache layer in the browser – all of which we do.
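As a rough sketch of two of those techniques (component names and the Tailwind-style utility classes are placeholders, not Resider's actual components):

```tsx
// PropertyCard.tsx – illustrative sketch; names and CSS classes are placeholders.
import Image from 'next/image';
import { Suspense, type ReactNode } from 'react';

// 1. Low-quality blur placeholder: a tiny base64 image paints immediately
//    while the full photo streams in behind it.
export function PropertyPhoto({ src, blurDataURL }: { src: string; blurDataURL: string }) {
  return (
    <Image
      src={src}
      alt="Property exterior"
      width={640}
      height={400}
      placeholder="blur"
      blurDataURL={blurDataURL}
    />
  );
}

// 2. Animated loading skeleton: shown instantly while the real card data
//    is still on its way over the network.
export function PropertyCardSection({ children }: { children: ReactNode }) {
  return (
    <Suspense fallback={<div className="animate-pulse h-48 rounded bg-gray-200" />}>
      {children}
    </Suspense>
  );
}
```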
Given that latency, we use these traditional, powerful but slower cloud functions only to process heavy computations, in scenarios where a user expects some wait time – for example, when scheduling a tour.
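A hedged sketch of what such a heavier route can look like, pinned to the full Node.js runtime (the route path and scheduling helper are hypothetical):

```ts
// app/api/schedule-tour/route.ts – illustrative: heavy work stays on the Node.js runtime.
import { NextResponse } from 'next/server';

// Opt this route into the full Node.js runtime (full package access, slower cold start).
export const runtime = 'nodejs';

export async function POST(req: Request) {
  const { propertyId, slot } = await req.json();

  // Heavy, slower work a user expects to wait for (talking to the property's
  // scheduling system, sending confirmations, etc.).
  const confirmation = await scheduleTourWithProvider(propertyId, slot);

  return NextResponse.json(confirmation);
}

// Placeholder for the real integration with the property's scheduling system.
async function scheduleTourWithProvider(propertyId: string, slot: string) {
  return { propertyId, slot, status: 'confirmed' };
}
```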
Edge cloud servers: limited V8 runtime, but near-instant startup.
Edge servers have existed for many years as “content delivery networks”; however, companies like Cloudflare and Vercel now offer their compute for more than just routing requests (how they were typically used) – they allow us to execute code. These servers are scattered around the country, not just in a handful of major data centers (i.e. the “edge” of the network; you could have one down the street!). The hardware is smaller, and the V8 runtime is limited (it cannot use many common packages). However, the V8 runtime does support some important standard Web APIs, such as “fetch”, which allows securely fetching data from our database. So, for Resider cloud functions that only need to fetch some data, we have moved them all to “edge functions” – and they are fast (responses in milliseconds). Even better, these edge servers can cache data. So if a user in your region requests the same data twice, the server will skip the database fetch altogether and instead return the data it saved from the last time it handled the same request!
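Below is a hedged sketch of an edge function with response caching; the data helper and cache lifetimes are illustrative assumptions, not Resider's exact settings:

```ts
// app/api/properties/route.ts – illustrative edge function with regional caching.
import { NextResponse } from 'next/server';

// Run on the edge runtime (V8, near-instant startup) instead of Node.js.
export const runtime = 'edge';

export async function GET() {
  // "fetch" is one of the standard Web APIs available on the edge runtime,
  // so data access still works without Node.js-only packages.
  const properties = await fetchPropertiesFromDatabase();

  return NextResponse.json(properties, {
    headers: {
      // Let the edge network reuse this response for nearby users,
      // skipping the database round trip entirely on a cache hit.
      'Cache-Control': 's-maxage=60, stale-while-revalidate=300',
    },
  });
}

async function fetchPropertiesFromDatabase() {
  // Placeholder: in practice this would be a fetch-compatible database call.
  return [{ id: 1, name: 'Example Lofts', price: 1850 }];
}
```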
3. MySQL server
Finally, when these secure cloud servers do request data from our database, that request is handled by a MySQL server managed by Google Cloud.
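For illustration, a minimal sketch of such a query, assuming the mysql2 package from a Node.js runtime function; the connection details and table names are placeholders, and edge functions would instead reach the database through a fetch-compatible layer:

```ts
// db.ts – illustrative sketch of querying the managed MySQL instance.
import mysql from 'mysql2/promise';

export async function getPropertyPricing(propertyId: number) {
  const connection = await mysql.createConnection({
    host: process.env.DB_HOST,        // the managed MySQL instance's address
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: 'resider',              // hypothetical database name
  });

  try {
    // Parameterized query: values are escaped by the driver, never concatenated.
    const [rows] = await connection.execute(
      'SELECT unit, price, available_on FROM pricing WHERE property_id = ?',
      [propertyId],
    );
    return rows;
  } finally {
    await connection.end();
  }
}
```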


The technical team
Michael McGreal is the lead developer, supported by a team of freelance expert React/TypeScript engineers from Argentina named “Async Development”. The core support lead is Nicholas Cisco, whose personal blog and brilliant technical writing can be found here.
Because the app is built using Next.js, which manages most cloud and security features for us, our core development team will always remain relatively small compared to other companies. However, we are always seeking new expert developers to add to our freelance network and eventually onboard full-time when needed.
