A fundamental component of our core philosophy is that many, if not most, functionalities that traditionally require a backend server running 24/7 can instead be provided by a combination of on-demand services hosted in the cloud.
Instead of requiring a dedicated or general-purpose server that must be supported, maintained and kept running at all times, we design and build APIs and microservices that can be called securely whenever needed.
In a traditional landscape, one or more servers host web services, applications and database services.
For small sites with low traffic, this model works fine and there are millions of websites globally that exist happily with static and dynamic content served reasonably quickly by the host server.
Constrained by the underlying host server performance, sites and applications hosted in this way can become overwhelmed when traffic is high. Denial of service attacks can render the site or application unavailable to genuine users, while operating system and software flaws pose potential security risks.
While additional layers of security (like web application firewalls) can help reduce impact from malicious attacks, filtering and blocking are less helpful when traffic is genuinely high.
Content delivery networks are great at ensuring static content is delivered quickly to visitors and users but won't help speed up access to dynamically created (or served) content and services.
In order to maintain performance, the traditional approach is to increase the scale and complexity of the landscape, adding load balancing servers, autoscaling and multi-region replication along with backup redundancy.
Operating overheads, maintenance costs and complexity begin to spiral in order to keep pace with straightforward fluctuations in traffic and multi-region growth.
With increasing scale, on-premises solutions become less agile in responding to changing business requirements, and risks from software and operating system defects are compounded. A move to cloud-based hosting can, however, reduce operating costs significantly and may reduce security risks where operating system patches are applied by the hosting service.
With a serverless, microarchitecture approach, complexity is avoided without impacting security or the ability to scale and adapt to fluctuating traffic.
Static content is still delivered by a CDN for high availability, but dynamic functionality is provided through stateless, event-triggered functions that are called into life, consume resources and provide computational power only when needed.
By chaining multiple functions together, along with use of lightweight databases and low cost static storage, entire applications can be built. Since serverless functions act as self-contained compute units, they can be invoked in parallel at huge scale to cater for high traffic.
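Conceptually, each of these functions is just a small handler that receives an event and returns a result. The sketch below shows what such a stateless, event-triggered function might look like in TypeScript, in the style of an AWS Lambda handler. The event shape, the contact-form scenario and the validation logic are illustrative assumptions, not one of our actual APIs.

```typescript
// A minimal sketch of a stateless, event-triggered function, in the style
// of an AWS Lambda handler. The event shape and logic here are hypothetical.

interface ContactEvent {
  name: string;
  email: string;
  message: string;
}

interface HandlerResponse {
  statusCode: number;
  body: string;
}

// The handler keeps no state between invocations: everything it needs
// arrives in the event, so the platform is free to run many copies of it
// in parallel when traffic is high.
export const handler = async (event: ContactEvent): Promise<HandlerResponse> => {
  if (!event.email.includes("@")) {
    return { statusCode: 400, body: "Invalid email address" };
  }
  // In a real chain, this step might write to a lightweight database or
  // pass the message on to the next function in the pipeline.
  return { statusCode: 200, body: `Thanks, ${event.name} - message received` };
};
```

Because the function holds no state of its own, chaining simply means the output of one handler becomes the event for the next.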
The standard model
Traditional scalable models can be highly successful where fluctuations in traffic and workload are predictable and/or change gradually over a period of time.
Rather than relying on a fixed provision of server capacity, in a cloud-hosted environment autoscaling can be used in combination with load balancing to adjust capacity dynamically, in an attempt to reduce over- or under-provisioning.
Incrementally adding or removing capacity, even when dynamically triggered in response to fluctuating traffic and workload, takes a finite amount of time, since server images must be provisioned and systems started. Operating system overheads simply don't allow for an instantaneous change, so a small amount of under- or over-capacity remains unless a more aggressive stance is taken with minimum capacity and autoscaling, at the expense of a greater degree of overcompensation. It's still a huge improvement over fixed provisioned capacity, though.
Fast changing scenarios
Traffic and workloads are not always predictable, though, and actual workloads can vary widely over short periods of time. For globally available systems and services, traffic may be consistently inconsistent throughout the day, or may be subject to sudden and sustained increases, whether through malicious attack or simply through popularity.
Under highly variable conditions, the ebb and flow of traffic may simply occur too quickly for autoscaling to respond effectively.
While overprovisioning capacity results in higher-than-necessary running costs, this may be less of a concern than underprovisioning, where the end-user experience is compromised by poorly responsive websites and applications. Damage to business reputation through inadequate service may be worse in the long term than short-term overprovisioning.
Using serverless microarchitectures means we focus on the solution and its structure. The code that provides the solution logic runs wherever the cloud hosting platform (in our case Amazon Web Services) schedules it. No provisioning of specific servers within a geographic region is required, since our code executes on a shared compute 'cloud': the huge server landscape that AWS makes available publicly.
Rather than having reserved compute power, the serverless model uses resources only when they're actually needed. Since the overall solution is divided into small, generally self-contained units of computing, the individual components can be called upon very quickly, with only a very small delay. Serverless solutions are therefore able to adapt rapidly to fluctuations in demand, using only the compute power actually needed and keeping costs to a minimum at all times.
When demand is high, whether due to a general increase or to sudden spikes in activity, serverless solutions can scale up quickly and effectively by operating massively in parallel within the huge shared compute capacity available. Operating a serverless solution equates to having a superscale server farm available at a moment's notice, but only paying rent for the individual services and processes actually consumed at any given time.
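This fan-out idea can be illustrated with a short sketch: because each invocation is self-contained, many can be fired off concurrently and the platform decides how to run them. Below, a local async function stands in for a deployed serverless function; the order-processing scenario is purely hypothetical.

```typescript
// A sketch of the fan-out behind serverless scaling. Each invocation is
// independent and stateless, so a batch of work can be dispatched all at
// once rather than queued through a single server. The "processOrder"
// function here is a local stand-in for a deployed serverless function.

const processOrder = async (orderId: number): Promise<string> => {
  // Each call is self-contained: no shared state, so concurrency and
  // placement are the platform's concern, not the code's.
  return `order-${orderId}: processed`;
};

export const processBatch = async (orderIds: number[]): Promise<string[]> => {
  // Dispatch every invocation at once; in a serverless environment the
  // platform can service them in parallel across its shared capacity.
  return Promise.all(orderIds.map(processOrder));
};
```

Whether the batch holds three orders or three thousand, the calling code is identical; the platform absorbs the difference.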
The economy of scale also represents a saving for the environment, since shared platform resources can be used by others when they'd otherwise sit idle. Less waste means less impact, and Amazon itself is committed to reducing its environmental footprint by using renewable energy to power its global infrastructure. By using our serverless microarchitectures you're making an environmentally responsible choice.
Even though our serverless model means our APIs are always accessible on demand globally via the web, the cloud hosting platform we use also allows us to bring them closer to both you and your users, ensuring minimum latency by making them available from multiple locations around the world.
Although we're based in Brisbane, Australia, using the Amazon Web Services platform and the growing range of powerful tools AWS offers allows us to develop and deliver our cloud applications for businesses both large and small (and micro) wherever your business is or needs to be.
Our APIs can be used to add functionality to websites as part of our comprehensive web design services, as powerful additions to existing websites, or as fully separate services that can be called from other applications to add functionality and computational power. Acting as tiny computational units, they can be chained together like cogs in a larger machine to collectively provide much larger and more complex services.
Applications range from embedding or enabling user generated content within web pages, through to providing data capture, reporting, data processing, integrations with existing systems and much more.
Slow page loading and poor performance can influence search engine rankings, so avoiding functionality that reduces page performance should be considered part of your SEO strategy.
Unlike some third-party plugins that embed remote scripts, which can cause blocking and poor page responsiveness, our APIs operate only when needed. Where scripts are needed to exchange information with our APIs and process responses, these are embedded locally to avoid blocking and poor performance.
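As a sketch of that pattern, a locally embedded script can call a remote API asynchronously, so the page keeps rendering while the response is awaited. The endpoint URL and response shape below are hypothetical; the fetch function is passed in as a parameter purely to keep the sketch self-contained (on a real page you would pass the browser's native fetch).

```typescript
// A sketch of a locally embedded, non-blocking API call. Nothing here
// blocks page rendering: the request runs asynchronously and the page
// updates only once the response arrives. The endpoint and response
// shape are hypothetical.

// Minimal shape of the fetch-like function we need; the browser's own
// fetch satisfies this interface.
type Fetcher = (url: string) => Promise<{ json(): Promise<unknown> }>;

export async function loadComments(
  fetchFn: Fetcher,
  apiUrl: string
): Promise<unknown> {
  // Await the API response without holding up anything else on the page.
  const response = await fetchFn(apiUrl);
  return response.json();
}
```

Injecting the fetcher also makes the behaviour easy to verify without a network, by passing in a stub that returns canned data.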
Naturally, cost depends on complexity and scale, but depending on the exact functionality required we can provide low-cost, pay-per-use models for API usage, with an agreed up-front hourly or daily rate for custom API and app design and construction.
Where we determine and agree with you that we are able to reuse functionalities and APIs for other clients, securely and without impact to your business, our fees and pricing models are adjusted accordingly.
If you’d like to know more or would like to discuss your requirements, please contact us and we’ll be happy to have an initial discussion without obligation.