
Agreed, the term is sub-par. But hear me out, the architecture is not. Let's talk about a few common misconceptions about Serverless!

There *are* servers involved, but it's not on you to run and operate them. Instead, the serverless provider manages the platform, scaling things up (and down) as needed. Fewer things to take care of, billed per use.
Myth: BUSTED!

Pay-per-use makes low/medium-volume workloads really cheap. But pricing is complex: number of requests, assigned RAM/CPU, API gateways, traffic, etc. Depending on your workload (e.g. high, sustained load), other options like VMs can be the better deal.
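To see where the break-even roughly sits, here is a back-of-the-envelope cost model. All rates below are made-up placeholders, not any provider's actual pricing; plug in your own numbers:

```python
# Back-of-the-envelope cost model. All prices are hypothetical placeholders,
# NOT any provider's actual rates -- substitute your own.

def serverless_cost(requests, duration_s, ram_gb,
                    price_per_million=0.20, price_per_gb_s=0.0000167):
    """Pay-per-use: a per-request fee plus GB-seconds of compute."""
    request_fee = requests / 1_000_000 * price_per_million
    compute_fee = requests * duration_s * ram_gb * price_per_gb_s
    return request_fee + compute_fee

def vm_cost(hours, price_per_hour=0.05):
    """Flat rate: the VM bills whether it serves traffic or not."""
    return hours * price_per_hour

# Low-volume workload: serverless wins easily.
low = serverless_cost(requests=100_000, duration_s=0.2, ram_gb=0.125)
# High, sustained workload: the always-on VM can be cheaper.
high = serverless_cost(requests=500_000_000, duration_s=0.2, ram_gb=0.125)
monthly_vm = vm_cost(hours=730)  # ~one month, always on

print(f"low-volume serverless:  ${low:.2f}")
print(f"high-volume serverless: ${high:.2f}")
print(f"always-on VM:           ${monthly_vm:.2f}")
```

With these (invented) rates, the low-volume function costs cents while the VM idles at a flat monthly fee; at hundreds of millions of requests, the VM comes out far ahead. The shape of the curve is the point, not the exact numbers.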
Myth: Kinda BUSTED!

Yes, classic web stacks on the JVM can take seconds to start: too long for cold starts with users waiting for a response. But don't despair; AOT compilation with #GraalVM (and Project Leyden soon) yields millisecond start-up times.
Myth: BUSTED!

Not if you do it right. Avoid provider-specific function APIs; use portable APIs like #Quarkus Funqy, plain REST APIs (e.g. via AWS Lambda Proxy), or container-based Serverless (#Knative). It hardly gets more portable than Linux containers.
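For the container-based route, a Knative Service manifest is all it takes to run an ordinary container image as a scale-to-zero workload; the image name and memory request below are just illustrative placeholders:

```yaml
# Minimal Knative Service: any Linux container image, scaled on demand.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: greeter            # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/greeter:1.0   # placeholder image
          resources:
            requests:
              memory: 64Mi
```

Because the deployable unit is a standard container, moving it to another Knative cluster, a plain Kubernetes Deployment, or a different cloud is a change of manifest, not of code.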
Myth: BUSTED!

True. That is, if you don't care about monitoring and incident response, keeping your dependencies and base images secure and up-to-date, budget control, right-sizing RAM and CPU, watertight IAM, etc. In other words:
Myth: BUSTED!

Oftentimes, CPU is allocated proportionally to RAM. Even if 128 MB of RAM would suffice, assigning more can reduce latency thanks to the extra CPU. The shorter execution time can put you into a lower billed-duration bracket and thus be cheaper overall.
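A quick worked example of that effect, assuming billing by GB-seconds and CPU that scales with RAM. The price and the "runs ~4x faster" speed-up are invented for illustration:

```python
# Illustrative only: made-up rate and timings, not real provider pricing.
PRICE_PER_GB_S = 0.0000167  # hypothetical compute price per GB-second

def invocation_cost(ram_gb, duration_s):
    """Billed as RAM x duration (GB-seconds)."""
    return ram_gb * duration_s * PRICE_PER_GB_S

# Assume CPU scales with RAM, so the same work runs ~4x faster at 512 MB.
small = invocation_cost(ram_gb=0.125, duration_s=2.0)   # 128 MB, slow
large = invocation_cost(ram_gb=0.5,   duration_s=0.45)  # 512 MB, fast

print(f"128 MB: ${small * 1_000_000:.2f} per million invocations")
print(f"512 MB: ${large * 1_000_000:.2f} per million invocations")
# More RAM, lower latency -- and in this scenario, a smaller bill too.
```

In this sketch, 0.5 GB x 0.45 s is fewer GB-seconds than 0.125 GB x 2.0 s, so the bigger allocation is both faster and cheaper. Whether that holds for your function depends on how CPU-bound it actually is: measure before you right-size.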
Myth: BUSTED!

j/k, I hope nobody thinks that. As always, context matters. Serverless architectures are great for some problems, a poor fit for others. Think for yourself and choose what makes the most sense, no matter what the hype du jour is.
~Fin~